Good morning folks. Today we are joined by Alexander Bokovoy. He's a senior principal software engineer at Red Hat working on things security related and identity management. Today he'll be talking about how the next release, Fedora 39, is planning to provide basic passwordless authentication functionality. Over to you, Alexander. Yes, thank you. So we don't actually have anyone in the room yet because the keynote is still happening, so it will be a bit awkward... ah, and actually there are people coming in. Great. I was about to start by saying that I'm doing a remote presentation, but the first people came in just at the beginning of the talk. So, this talk is a variation on a few talks I gave earlier this year: one at FOSDEM in February and then one in May at the SambaXP conference. This is the third one, and it's a progress talk, because the work to implement all of what I will be talking about is ongoing in the upstreams and in the distributions. Things are changing, and the target is also changing over time. So let me start with a bit of a description of who I am. I work at Red Hat on all things related to identity management. Basically anything around FreeIPA, SSSD, Samba and Kerberos is me or my team. I've been at Red Hat for 12 years now, so there's a lot of history in the work I will be describing today. In fact, with all of this we're going back almost 30, 35 years, into the 80s. I will talk about that, about the progress we have today in FreeIPA and related components, and about the future: what will happen in Fedora 39 and later, because not everything is ready there yet. So, the past. If we look into the past, and the past is literally the last 30 years for sure, the assumption we had at the operating system level was that you have more or less compatible authentication mechanisms everywhere. When you try to log in to a system, you expect that the system is actually configured the way you expect it to be used. If it's not a system that uses centralized identity management, then you have to make sure it has all the same user databases and all the same passwords, and you really want to be able to transfer this authentication state from one system to another. People went a long way trying to solve this problem, and for the absolute majority of them only two approaches survived. One is the use of some sort of authentication agent that stores your previously authenticated credentials and is consulted over the network. This is the typical case for SSH, for example: when you log in over SSH, you have a chance to use the authentication agent on your original system and get access to it piped through. There was recently a CVE in OpenSSH that exploited this nature of the authentication agent, by triggering the loading of more or less arbitrary code through the authentication agent on your system, on your laptop, while you're trying to log in to a remote server. So these things have their security ups and downs, even though certain implementations have existed for decades. And the typical approach is literally: you log in on your desktop or laptop, you unlock your secrets manager, be it the authentication agent for SSH or your desktop keyring, and then you use these secrets through the authentication agent and consume resources based on them.
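For reference, the agent pattern described above boils down to something like this minimal shell sketch (the host names are placeholders); the forwarding step at the end is exactly the convenience that the OpenSSH agent CVE mentioned here abused:

```
$ eval "$(ssh-agent)"          # start an agent for this session
$ ssh-add ~/.ssh/id_ed25519    # unlock the key once; the agent keeps it in memory
$ ssh build.example.com        # later logins reuse the agent, no passphrase prompt
$ ssh -A jump.example.com      # -A forwards the agent; convenient, but the remote
                               # host can now ask your local agent to sign for it
```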
There are a bunch of application-specific issues, pick any area and you will find problems related to this, but the most important problem is when you need to go and access resources that are out of scope of the protocol. You want the SSH agent or SSH keys to be used, but you need to access a file system: go find a file system implementation like NFS or SMB or any other that actually uses SSH as its authentication method. So you have to use something else there, and typically it's either a password or something like Kerberos, because Kerberos is another way of transferring your authentication state, using the mechanism of issuing service tickets and then presenting those service tickets between the applications operating on the client side and the server side. The important thing in the Kerberos case is that a long time ago, I think more than 30 years ago, the decision was made to split the initial authentication from the actual use of the authenticated state. The initial authentication can happen using whatever method you and your key distribution center agree on. Over the years these methods were developed, and maybe 20 years ago the use of smart cards became a more or less standard way of, so to say, passwordless authentication in Kerberos. That's the one used, for example, by the majority of governmental customers. It simply means that you use a smart card device, with PKCS#11, to prove your identity to the KDC, which issues a ticket-granting ticket that contains some information about you and that you can use later to ask for service tickets to particular services. That is actually a fairly good mechanism, because once you get the ticket you present it every time you need to talk to a new service, and until that ticket expires you don't need to reacquire your initial credentials. That builds up the idea of single sign-on, because you really did sign on once. Whether that single sign-on happens at the login screen on the desktop or through some other mechanism is secondary; that's part of the user experience you build on top. But this whole thing works fairly well. The interesting part is that in FreeIPA, and in Fedora specifically, we have had this for almost 10 years. If we look at the complex things you can do with it, this is what FreeIPA does when you enable a trust to Active Directory; this is the typical authentication flow you would see there. This is actually from our documentation. It shows that when you try to log in over SSH, you get asked for a password. If there is a password to be used, it goes into PAM, and through the PAM stack SSSD captures it and acquires a Kerberos ticket for you, and on login to the shell you get that Kerberos ticket, which you can use later. But what happens behind the scenes is a whole machinery of different components being involved. One of them, let me try to show it here, is p11_child. That's the one that talks to the smart card reader, if one exists, configures it and so on. These are typical things. You can see that the smart card reader is more or less on the client side; you don't need to have it connected to the server side and so on. But if we look further into the past, in 2016 I gave a sort of similar presentation about the enterprise desktop with GNOME and FreeIPA, at Flock, I think it was in Kraków.
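As a brief aside, the split between initial authentication and ticket use described above looks roughly like this shell sketch (realm, host and the PKCS#11 module path are placeholders, and the PKINIT option syntax is quoted from memory, so check kinit(1) before relying on it):

```
# initial authentication: a password, or PKINIT with a smart card
$ kinit alice@EXAMPLE.COM
$ kinit -X X509_user_identity=PKCS11:/usr/lib64/pkcs11/opensc-pkcs11.so alice@EXAMPLE.COM

$ klist      # shows the ticket-granting ticket (TGT)

# single sign-on: GSSAPI trades the TGT for a service ticket, no password asked
$ ssh -o GSSAPIAuthentication=yes files.example.com
$ klist      # now also lists host/files.example.com@EXAMPLE.COM
```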
And we used FreeIPA there to log in to the GNOME environment, and we worked with the GNOME folks to extend GNOME to support all of this. The whole thing looked like this; this is, I think, a VM at that point. So I'm logging in with the password, just a password, to obtain a ticket, and you can see I have a ticket. This is 2016. Then I use that ticket to bring up the VPN connection, and that VPN connection was established using Kerberos credentials, so we got a service ticket to my OpenConnect VPN endpoint on the other side. The next step was to use the application, the IPA management tool, to add, since this user can use OTP, a YubiKey-based second factor. And instead of being asked for a password, you are asked for the actual first factor and a second factor from this YubiKey. As soon as I added that YubiKey, I can lock the screen, and the next attempt to log in will ask me for the factors. So now this is not just a password; it's sort of passwordless in the sense that whatever was your password becomes one of the factors you enter. In the same way you can do smart cards and so on. This is the kind of thing that has existed in FreeIPA and in Fedora since 2015 or so. And this is a new ticket now, not the old one; you can see it, and I need to stop this because this is the next one. And you can use that for a lot of stuff. So we did a bunch of integrations. We did the integration with GNOME Online Accounts, so that if you have this Kerberos ticket obtained through the login, whatever method was used, it is tracked and renewed if needed. Then the GNOME browser at that point, and later Firefox and later Chrome, all got the functionality to support single sign-on using Kerberos. That came in 2015, 2016, together with Fedora and others. And yeah, that was the base for all of this work. You could use this everywhere, whether at work, in the enterprise, or at home; it enabled many people to go there. What happened since then, though, is that we, we as a society, effectively stopped using the infrastructure as the driving factor for how we work. We switched from infrastructure for people to infrastructure for applications. You don't care whether your laptop is actually enrolled into the workplace; you're most likely using some browser to access the resources that are provided by applications at your workplace. Six, seven years ago that wasn't true: you had to have your system enrolled, you used the resources in your domain and so on. The change came with the widespread use of mobile devices; phones killed everything else, also in terms of application development and frameworks. So things became increasingly mainframe-ish again, in the sense that the browser became a sort of new mainframe, ubiquitous in terms of use and access. And for the authentication, and most importantly the authorization, of applications, people switched to OAuth2: a set of standards and a framework of authorization flows. In many organizations, what happened is that if you integrate new applications, you integrate them with OAuth2; you bring your authentication into the authorization framework provided by OAuth2. And this is the important part. So, okay, if my laptop is not really on the network anymore, not on a network that is trusted by the domain and so on, then how do I trust all of this?
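As a brief aside, the second-factor setup from that 2016 demo looks roughly like this from the FreeIPA command line; the user name is a placeholder and the exact options may differ between versions, so treat it as a sketch and check ipa help otptoken-add:

```
# allow two-factor (password + OTP) authentication for this user
$ ipa user-mod alice --user-auth-type=otp

# software token: prints a QR code and a secret to enroll in an authenticator app
$ ipa otptoken-add --type=totp --owner=alice

# or program a YubiKey slot directly over USB, as in the demo
$ ipa otptoken-add-yubikey --owner=alice
```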
This whole change eventually led to the situation outlined in the so-called zero trust architecture, where organizations effectively say, and are being forced by governments around the world to say, that we don't trust our infrastructure anymore; we have to validate every single thing that happens there. But we also trust neither the end-user devices nor our own applications. So it changed the game. The idea isn't new; it's basically a statement of fact. Everybody in the world is doing this, and they have to re-verify at the boundary where the application is running: check that things are right, authorize every single access and so on. So what we get is that everything really wants to be a browser. And this story is actually similar to what we had in 2016, except that in 2016 we dealt with environments where we had captive portals. Even today, here, you have a captive portal to get on the hotel network; you have to solve this captive portal thing. In 2016 one of the problems we outlined was about the same: if I want to log in to my laptop with my enterprise credentials, I need to be on the network; but in order to be on the network before I log in, I have to solve the captive portal, and I cannot solve it, because that means running untrusted code in a component with pretty much root privileges, because of how the login system works. This is not a Linux-only problem; this problem is everywhere. Some time ago I was talking with Microsoft engineers who were implementing live.com and what is now called Azure or Entra ID, whatever methods they used to log in to Azure and other resources, and they said they had the same issue. Running an unconfined web page to log in to the environment, before you have logged in to the machine, is a constant pain, and there is no real solution other than running it in an isolated, scratch environment. You can try to provide filtering on the level of which URIs are supported, but federation, as part of the OAuth space, kills it all. If I want to log in, let's say, to Red Hat's single sign-on system, I go to sso.redhat.com and say that I want to log in with my Google account, so it forwards me to the Google page. So from the perspective of the login system, GDM or the Windows login system, I am now dealing with content that might be forwarded from Google to yet another resource that actually handles my authentication, if I never gave credentials to Google when I created the Google account. It's simply not possible to solve this in a safe and sane way. We didn't solve that problem for captive portals, because it's exactly the same thing, and we have not solved it yet for today's login to online services; again, I'm talking about logging in to the session first and then using the browser. We don't have a browser running there yet. And the problem is really the same: how to bridge these things together. So how do we get there? We started from the other side. Looking at these environments, we realized: okay, you have a browser somewhere. You really do have a browser on some device, and if it's an online service you have to authenticate with, then you can probably run the browser on another device to log in here. So in FreeIPA we implemented this last year; it's available in Fedora 37 and RHEL 8.7, I think, and later. You can instruct users to visit the OAuth2 identity provider and authorize the access there.
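For a feel of how that is wired up on the FreeIPA side, here is a sketch. The commands exist in recent FreeIPA, but the option names below are written from memory and all values are placeholders, so verify against ipa help idp-add and ipa help user-mod before using them:

```
# register the external OAuth2/OIDC identity provider (Keycloak in this sketch)
$ ipa idp-add MyKeycloak --provider keycloak \
      --client-id ipa-oidc-client --secret \
      --base-url https://idp.example.test:8443 --org demo

# mark a user as authenticating through that IdP
$ ipa user-mod alice --user-auth-type=idp \
      --idp=MyKeycloak --idp-user-id=alice@idp.example.test
```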
So we did some sort of magic which is based on the device authorization grant flow, described in RFC 8628, which is similar to how you authorize your TV to use your YouTube account: "sign in to YouTube". Here you sign in to your SSH server, but really not into the server; you're signing in to your Kerberos environment. Instead of doing this every time, you get the Kerberos ticket and then continue using the Kerberos ticket. So it looks like this. There's a lot of flow happening; let me show you, I really hope it will work. I use Keycloak here as the OAuth2 identity provider, and this user authenticates with a password first. Right now the user doesn't have a security key, a FIDO2 key, associated with it, but I want to associate one, so I register it. You can see I'm using the browser here on the other system; in this case, of course, both are running on the same machine, but that doesn't matter. I created this key, so basically WebAuthn authentication here, and I configured Keycloak to use WebAuthn authentication if the user has it set up. Now, when I try to SSH into the system, I get this prompt that says: hey, go to this URI and confirm that you want to access this environment. As a user I go log in there and use the security key to log in. Of course Firefox asks me to confirm that I want to use this key, and then asks me to confirm that I want to log in to the system. So I logged in to the system and I got the Kerberos ticket. What happened here is that I traded WebAuthn authentication, performed somewhere else, for authorization to access the token and user info for this user at that identity provider, and I used that fact as the basis for issuing a ticket-granting ticket in Kerberos. So starting from here, I can use the Kerberos ticket for everything else I need, because this is the ticket-granting ticket; I can trade it for tickets to other services. Of course, doing this every time is kind of weird, and the weirdest part is that I cannot do this at login, because you see the size of that URI. I cannot really even display it in GDM; it would be cut off at the beginning. "Authenticate at htt..." and the rest would be cut off; you basically wouldn't see anything there. So we need to do something to fix this. Okay, but can we actually get away from the networking? Because this approach really wants that networking round-trip to happen. Let's see if I can do that. This is a relatively recent demo that I made at SambaXP, where I basically took a FIDO2 token and used it to log in to the GNOME environment. It still cuts off some information; luckily, in this case the important parts, entering a PIN and asking the user to validate their presence by touching the device, which is what you really need, are shown. So it works. This is actually Rawhide at that point, but you need a special version of FreeIPA which is not in Rawhide yet. At this point everything is in FreeIPA upstream already, just not released; we are going to release it maybe in a couple of weeks. This is how I log in on this machine as well already. And it also works in environments where you don't have network access. The only thing needed is to provision the local account information that SSSD keeps in its cache about this user; that information comes from the network, so you need to have been on the network at least once. So this is good: I get the Kerberos ticket, and that's great.
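For reference, the device authorization grant from RFC 8628 that powers this boils down to two HTTP calls. This is a generic sketch against a hypothetical Keycloak realm; the URLs and client ID are placeholders, not the ones from the demo:

```
# 1. ask the identity provider for a device code and a short user code
$ curl -s -d client_id=demo-client -d scope=openid \
    https://idp.example.test/realms/demo/protocol/openid-connect/auth/device
# -> {"device_code":"...","user_code":"ABCD-EFGH",
#     "verification_uri":"https://idp.example.test/realms/demo/device",
#     "expires_in":600,"interval":5}

# 2. the user opens verification_uri on another device and approves; meanwhile poll:
$ curl -s -d client_id=demo-client \
    -d grant_type=urn:ietf:params:oauth:grant-type:device_code \
    -d device_code=... \
    https://idp.example.test/realms/demo/protocol/openid-connect/token
# once approved this returns the tokens; in the FreeIPA case the KDC side does the
# polling for you and issues a Kerberos TGT instead of handing you the raw tokens
```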
This will be configurable the same way we configure the rest of the IPA authentication types. You can see that you can do OTP the same way I did in 2016; you can do smart cards, that's PKINIT, public key infrastructure; you can use an external IdP provider, like I showed in the demo before this one; or you can use the FIDO2-based tokens, which we intentionally call passkeys, because we support anything that FIDO2 supports. FIDO2 also plans to support the Bluetooth-based ones and whatever Android and iOS support, but those are not supportable yet; we will get there at some point. It's a somewhat complex scheme, but it's built on the same architecture we use for other things: a RADIUS backend to Kerberos that simulates certain things. It only uses the RADIUS protocol between the Kerberos KDC and itself; it doesn't use the rest of RADIUS. It's a way to extend things. We built this up, I think in 2012 or '13, to support two-factor authentication in IPA, for Kerberos and for the rest, and it's used for all the other stuff as well. The same technology can also be used for enabling this in Samba AD. Well, there's a dependency on MIT Kerberos, so it will not work with Heimdal; that's fine, because Heimdal doesn't have pluggable mechanisms for all these methods, we only have them for MIT Kerberos. And of course right now it doesn't do that, but extending the IPA infrastructure to plug in behind Samba AD is possible, because it's just LDAP storage of certain bits, and the same utilities can be used to manipulate it. So that's in my plans, again not Red Hat plans but my upstream plans, to have this supported. But then another question is what to target. If we talk about this, it's not going to be supported by Microsoft, so Windows systems are out of this story; they're not going to work with anything like this. You can extend authentication packages in Windows and provide your own, but writing that for these things is a bit cumbersome, and you really only do it where you have customers, because it's a lot of work. So we will probably target Linux systems. Given that Fedora openQA now supports testing Samba AD, we can actually test the whole cycle in openQA, from the client system logging in and going through it, the same way for both IPA and Samba AD once it's in there. That's the beauty of the architecture we have in openQA. For the desktop integration, the bigger issue is of course the inadequacy of the UI, in this case in GDM, but it really extends to other desktop environments as well: they are inadequate for anything but passwords. GDM is more advanced here because we worked together with the GNOME folks, and it covers a few other methods as well. For smart cards, for example, in 2019 we extended GDM to let you choose which smart card to use and which account to map it to, and so on. We're now working with them to design how all this extends to all of these methods, not just one, because we need to cover login with an external identity provider and we need to cover passkeys. For passkeys, some of them, the physical ones, work already, so we just need a bit of nicer UI; but the phone-based passkeys will require scanning QR codes from the screen, and that also means enabling this triangle of connections between the different services to discover what to negotiate with, and so on.
So you need to have an infrastructure for this, and if you want to expand it beyond just one desktop environment, it needs to be more or less independent of any single one. There's a lot of discussion ongoing, most of it not public, because in the initial steps we have to collaborate tightly on things, but some of it can already be seen in GNOME's GitLab environment: there are some branches, there are tickets open and so on. So let me show how this looks. This is a proof of concept, not a fully working thing. Ray Strode, who maintains GDM, wrote a proof of concept for the GDM side, and he wrote a special PAM module that interoperates with it. It's not part of SSSD; it's just a separate module to test the proof of concept. So this is a login with an imaginary user whose authentication is associated with some external identity provider. When this user tries to log in, they see this different dialogue that says: okay, continue to initiate web login and authenticate with an external device; and when the user touches these left and right arrows, they proceed in the flow. So it says you can go back and choose another authentication mechanism, or you can go forward. It's already different from what was there before. It's not necessarily the final design, but we've been discussing it with the UX people. In this case, once you proceed, a request is made, a new authentication URI is generated and converted into a QR code, and then, once you've done your part, you press the next button and you get logged in, if everything is right. At this point, when you log in, you get the Kerberos ticket issued; okay, not in this proof of concept, of course, but in a real environment, if you run with this, you get the Kerberos ticket issued. And then again you can use it for the rest: in the browser, in other applications, for mounting your home directory, and so on. But there are other things, a lot of other things. For example, in real life you often have to log in, say with this token, when you don't have a network. Okay, you can do that. But how do you differentiate this case for the user, to say that your single sign-on experience will not be there? Yes, you're logged in, because you're allowed to log in offline, but we need to tell them; we need to pop up some information. We need to differentiate whether this was a graphical login, so that this warning could be a pop-up somewhere, and we need to show the message on the console if it was, for example, an SSH login and you were unable to get the tickets, and so on. So there's a lot of UX work and improvement needed to get these things done. And then this is where the hard part comes in: it all has to work together. It's not just one project doing its job, or two, or three projects. You have to coordinate how things are delivered and appear together in the distribution. That's how we work with Fedora. Typically we don't advertise most of this work because, well, coordinating is hard enough; promising something when every party can slip and delay their delivery is also impossible. So, in true open source fashion, it's ready when it's done. That's how we work. But we are not the only ones doing this; there's a lot of parallel effort from the overall community. There are people looking at how to get these passkeys and their information shared across different execution environments.
How to get them into Flatpaks without exposing everything, how to get these details back and forth in different places. Most of that work is not using Kerberos or really relying on Kerberos, so we are not overlapping with anyone; we each take a piece of the puzzle, solve a piece here and a piece there, and then combine these efforts together. For example, I know that systemd has some support for FIDO2 integration through libfido2, so we have some hope that the definitions of these passkeys in FreeIPA could eventually also be used for unlocking disk encryption and secure boot and all of these things together. At the same time, it would be interesting to see how you can actually get this into the pre-boot environment and make that boot happen: you have to cache some data that SSSD provides, you have to use it beyond these network-reachable environments, and so on. So this is a lot of work, and the whole experience is not something delivered in a single release; it may well take several releases to complete. I know that the GNOME parts won't be ready until maybe March next year, if we are happy with what we get, which I heard we are not yet. So there is still discussion between us and GNOME developers from other distributions on how to get all these things working together. But the basics are there, and I'm really excited that it actually works. I do log in on my system here with it, and I can show you, as a sort of live environment. I locked the system; this is using PAM for the vlock authentication. So now I can unlock the system. It starts blinking on my device because I enabled user verification, and I did verify it. Oh, sorry. And you probably see that my ticket is old. Oh, this is actually a different one, this is the Red Hat one; let me switch to mine. It's this one, 12:15. 12:15 here, that's the new one. Also, I'm not on the VPN to my home system, but this Kerberos environment uses the so-called KDC proxy mechanism, so it's proxied over HTTPS to my environment, similar to how we use it in Fedora for Kerberos login. That's why the login works, even though I'm officially offline in terms of the IPA and SSSD environment. So that thing is here. Let me grab... okay, this will not work because I don't have the VPN, right? Let me log in there. Why not? Here it is, up. So I now do have a VPN connection and I can actually run it. So there is a passkey mapping in my IPA account; well, this is my home setup. And one of these keys; you can see there are multiple of them registered. I won't dig myself deeper into all of this. So this is as far as we are, but it really enables you to do all this stuff. One thing I did not say is that I can use this Kerberos ticket, or any Kerberos ticket, to authenticate not just to other systems, that's the usual SSH to another system, but also as part of PAM authentication on the same machine. That makes possible things like: if you have a Kerberos ticket, you can do sudo based not on a password, or on entering anything again in the authorization cycle, but on your Kerberos ticket. This has been supported in Fedora and RHEL for about two years now; it's in the documentation. You can tune SSSD to specify which Kerberos tickets can be used for that, so you can say that a Kerberos ticket obtained with a passkey is the one I'm allowed to use for elevating with sudo. Just a password will not work.
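The SSSD side of that looks roughly like the sketch below. pam_sss_gss.so and the pam_gssapi_* options are the documented mechanism, but the service list and the indicator name here are illustrative, so check sssd.conf(5) and your KDC's authentication indicators before copying:

```
# /etc/sssd/sssd.conf
[pam]
pam_gssapi_services = sudo, sudo-i
# only accept tickets whose pre-authentication carried the given indicator
pam_gssapi_indicators_map = sudo:pkinit, sudo-i:pkinit

# /etc/pam.d/sudo: try the Kerberos ticket first, fall back to the normal stack
auth sufficient pam_sss_gss.so
```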
Just like anything else will not work; only if I use FIDO2 or a smart card. And that kind of thing is already in Fedora. I've been telling myself for a couple of months now that I should write an article about all of this, because many of these things exist but they are not discoverable. Of course we have it in the humongous RHEL IdM documentation, but go find things there. That's another part of the story. But yeah, this is all I have. If you have any questions, I think we have maybe five minutes or so. Thank you. Okay, so I have a few questions. I'll take one question, then if somebody else wants to ask they can, and I'll go back and forth. For your first demo, when you were using the YubiKey and you configured the YubiKey through, I think it was FreeIPA: was that an OTP kind of token? What kind of token did it use there? Because it wasn't FIDO2. You're talking about the first demo? Yes. The first demo, from 2016, was using a YubiKey with TOTP. Yeah, the TOTP there was just the six digits, right? I don't recall the exact technical term for it. Yeah, yeah, that was it. How could it configure it? Because the YubiKey needs to have the secret for this. Yeah, this is integrated in FreeIPA; it will configure the YubiKey. You can set it up with a software token, or you can set it up with a YubiKey and so on. The secret: if you don't use the special command I had, otptoken-add-yubikey or whatever it's called, it will print you a QR code and a secret that you then use to program your YubiKey manually. But if you use that command, it uses the Python bindings for the YubiKey library to program the USB device directly. So it's kind of transparent, but you need to run that command on the machine where the device is plugged in. Thank you. That's the part that has existed basically forever now. Maybe you mentioned it, I was a little distracted, so I apologize in advance if it was mentioned, but is it possible to set this up standalone, without a full-blown FreeIPA infrastructure? We are focusing on making this work with FreeIPA in the first stage. The client side, meaning the SSSD side, only needs to have this information in its local database, the local cache that it has; the information can be injected into the local database. We don't have utilities for that yet; there is sssctl, but it doesn't have a mode to inject this. It's in the plans; we will do that. The idea is to have a complete replacement for the pam_u2f thingy, because from our experience pam_u2f is actually a bit insecure in terms of configuration management, and you really have to split these things. That's another part: we need to provide user utilities to make it nicer for common folk; right now it's more usable for admins who know what they are doing. Excellent. I'm looking forward to a nice blog post or documentation; that would be really helpful. Thank you. Other questions? I already scratched one out. For the second demo, where we were using Keycloak, using the Keycloak interface to configure or fetch a ticket that will be added for the user, as far as I understood, when you clicked on configure: does this mean that FreeIPA is your main IdP, and it just extends the authentication to a Kerberos server where you configure your Kerberos environment, which is FreeIPA in this case?
This one is all done through Kerberos, and only in FreeIPA for now. You cannot do this offline, because you have to communicate with the Kerberos KDC, and the Kerberos KDC has to communicate with the online identity provider. So this requires Kerberos; you cannot configure it standalone. Thank you. Any more questions? I know almost everybody in this room, but in case somebody doesn't know me, my name is Zbyszek. I work at Red Hat in the plumbers team on systemd, and I'll be talking about stuff that happened in the last year. I submitted a version of this talk for DevConf a month and a half ago and gave the talk; I think it wasn't terrible, but I also didn't get much feedback from the audience. I asked somebody from our team, I don't know if you know him too, and he said it was a good talk, maybe, if you're doing large-scale infrastructure, because I was talking about immutable images and PCRs and signatures and verification and stuff like that; if you're an individual contributor, this is not terribly useful. There are usually second chances in life, and I guess with conference talks you do get a second chance every once in a while, so this time I'll try to do better and focus on end-user features in systemd. If you care about the large-scale stuff, I'll try to stay away from those topics this time. So, a few days ago we released systemd 254. We try to do regular releases; our goal is to make six releases a year, we average 2.5, so there's still some room for improvement. Compared to, I don't know, five years ago, we have increased the quality quite a bit. We do a bunch of RC releases; we kind of copy the kernel workflow. We make an RC, and once the RC is made we stop accepting major features, and features in general, and try to fix bugs until any known regressions since the last release have been removed. This means that between the first RC and the final release we block on that, and sometimes it takes quite a while; occasionally we just revert things if we cannot fix them. So 254 had three RCs, and we plan to do another release this year. Another thing that has been improved, and I think it improves quality and cross-distro collaboration, is point releases. As soon as any given version is released, we create a stable branch for it and start pulling in backports of commits. I think we are at 251.18 right now, and every year we make more of those point releases: we have made 29 in 2023 so far, and if we keep up this pace it will be around 50 this year, rounding up. Most distros have switched to building from the stable releases. The stable releases started out as the patch set that was used in Fedora, and then we added tags to it, and I think it's good that other distros are reusing this work and actually contributing to it now. The number of open issues on GitHub, which systemd upstream uses, is growing, well, 10% per year, or 12% per year, quite a bit, but it's also not terrible. I think as long as the project is alive this number will have to grow; there's just no way we will be able to close more issues than are opened. But in Fedora the opposite has happened, and I think this is pretty nice: a bunch of people made an effort to clean up the queue in Fedora. David Tardon, Yu Watanabe and other people worked on going through the bugs and closing some out, so over roughly the last year we have closed about half of the bugs in Fedora, especially the RFEs, which were either resolved or moved upstream. I think that's good. So let me talk about some specific features. systemd is big on
synchronous operations: you request a service to be started, and systemd wants to know when the service has finished bringing itself up and when it's actually ready to serve requests. Things have been like this since the beginning, and the easiest, nicest way to implement it is with Type=notify. There are other protocols, but this one is about issuing notifications, using the sd_notify() library call, or the systemd-notify helper from shell scripts, or implementing the whole thing in your own code if you don't want to use the library; it's fairly trivial because it's just a text string sent over a socket. This works nicely, and we made it slightly easier to use from shell scripts in the latest release, because systemd-notify got a new exec option, so you can send the notification and then exec something. The next thing is reloads. It is traditional for Unix daemons to do a reload after getting SIGHUP or another signal, but this is asynchronous, and if you wanted to do it the right way you actually couldn't use a signal, because there would be no notification going the other way. The recommended way was to implement your own binary, make it communicate with the daemon, do the reload and communicate back, and this was very annoying. We figured out it's actually quite easy to do it properly: send a signal, and then have the daemon send a notification the other way. The problem is that this cannot be introduced by default in a backwards-compatible fashion, because the daemon actually needs to send the notification back. So there's a new type called notify-reload: systemd will send SIGHUP on its own, the daemon needs to send back a notification, and now we have synchronous reloads. In general, for backwards compatibility, all the types that have been there for services are still there, but the recommended ways to do things have changed. For services which don't have a start-up phase but are just ready as soon as they are started: don't use Type=simple, use Type=exec. The difference is that with Type=simple, systemd forks and the child then execs the binary, but as soon as the fork happens systemd already considers the service to be started. In hindsight this is not very useful, because exec'ing the binary might fail. With Type=exec the point of readiness is not the fork but the exec that happens in the child, which is much more useful. If you use Type=forking, which follows the traditional Unix protocol where you do a fork and then another fork to detach from the parent: with systemd this is all useless, just don't do any forking or detaching; use Type=notify or Type=dbus. If you have a D-Bus service, the point of readiness is when the child acquires the D-Bus name, that's Type=dbus, or when it sends a notification, that's Type=notify, or you can do one better and switch to Type=notify-reload if your service supports reloads. And Type=oneshot is still okay. Another useful but not very well known thing is, not a type but a dependency, called Upholds=. We have Wants= dependencies, where you start a service and it pulls in a bunch of stuff that is needed for it to function, but this happens once, when it starts, and if those dependencies later die, crash or are stopped, the Wants= or Requires= dependencies have no effect. Upholds= is a variant of this which is effective for the whole lifetime of the upholding service, and the dependency will be restarted. A classical example is where you have a container or machine that has an Apache httpd service which
also requires a database to actually provide any answers, and one is not useful without the other. So you make the top-level service have Upholds= dependencies on all the other units that are necessary for it to function, and then systemd will do its best to restart those dependencies as long as they are needed. And now it's easier to do this: there is a new .upholds directory, like the .wants and .requires directories you already have, and you can create the symlinks in those directories at install time using UpheldBy=. And there is a bunch of new unit settings. OpenFile= is just a convenience thing where systemd will open an arbitrary file for input or output, in read mode or write mode, for a service. The nice thing is that this allows the service to be less privileged: if the service needs to read a certificate from a file, for example, or write to a file somewhere in the file system, we can make the service less privileged, have the manager open the file for the service, and then the service just gets a file descriptor, not a path. This uses the same protocol that socket activation uses: you get a file descriptor, and an environment variable that describes the file descriptor and gives it a name, so you can figure out what each file descriptor is for. Another convenience thing is DelegateSubgroup=. If part of a cgroup hierarchy is delegated to a unit, and there is a process that does the management, that process cannot live at the top of the sub-hierarchy, because the kernel does not allow processes to be in non-leaf cgroups. So the process would have to create a sub-hierarchy, move itself into it, and then do the management. This was a bit of extra work that is not really necessary, so with DelegateSubgroup= systemd will do the initial setup and move the process into the right subgroup. And another thing that touches on the stuff I wasn't supposed to talk about: ExtensionDirectories=. You can give a list of directories that will be used as overlays on the host file system for the service, so by just giving the name of a directory with a bunch of files in it, you can populate the file system that is seen by the service. It's nice for, well, extending things; there are other ways to do it, but this one is very convenient. And in general, not for the settings I'm talking about here, but for security-related settings, it's always good to use systemd-analyze security. In case you haven't seen how this works: systemd-analyze security and the service name. Is this clear?
Large enough? Maybe I should do it like this. So this gives a list of setting names, and it's adding apples and oranges and cherries by giving some numerical score, and at the end it says that the service is unsafe. Usually it shouldn't be taken too seriously, and of course the service could be perfectly safe; "unsafe" means that it's not using the systemd features that systemd wants to advertise here. But this list is very useful for thinking about different ways to sandbox a service; it works as a checklist and a source of hints. It has been around for a few years, but still most units don't use it, and it would be really good if people were putting more sandboxing into the system. The kernel provides nice sandboxing features, and we should make use of them. Another feature that is new is soft-reboot, and I wanted to make a demo of this. Can I turn this off somehow? Does this work? Let's see. I have a virtual machine here, and I do a soft-reboot. With soft-reboot, systemd has three levels of restarting the machine. You have the traditional mode, where systemd shuts down all the services and the reboot goes through the firmware, the boot loader and the new kernel. We have kexec, where we load the new kernel into memory, systemd shuts down all the services and then tells the kernel to execute into the new kernel, and the new kernel starts a new systemd and a new set of processes. And we now have soft-reboot, which works by skipping all those extra steps: systemd shuts down the whole user space, re-executes itself and starts up the user space again. So if I do this, well, it's restarted, and the main benefit is essentially speed. This is for the cases where you just don't need a full reboot. And I wanted to show that if we look at the list of processes, and I will do this in a not very nice way of just looking at everything that does not have a bracket... does this work now? I tried this before... grep, please help me, I want everything that does not match a bracket, grep -v bracket, this should work now. So I have process one and a bunch of other processes, and if I do a soft-reboot and run the same thing again, PID 1 is still there, but I get a whole new set. Sorry, I wanted to show this because I thought it would be cool, and it also answers the question that people sometimes have about what gets to survive: actually, pretty much everything goes away. PID 1 itself re-executes, so it's running new code; it is also essentially replaced, it just carries over a bit of state. Okay. There's also a bunch of helper thingies: systemctl list-paths does what the name suggests, it lists path units, and we have the same for automounts. It's just a convenience thing; there's a bunch of those, they get added every year or two, and they just list specific unit types in a nice way. And I was talking about reboots; for reloads of services and restarts of services, the nicest way to do restarts is when you don't close the file descriptors, so that any client connecting to the sockets or pipes that your service has open does not get a refusal; it's just delayed a bit while the service is restarting. And we have the notification protocol for it: you call sd_notify(), you attach a socket to the notification, and you tell systemd to keep the socket for you while you are restarting. Sometimes it's going to be a bit hard to figure out what is going on, like which sockets are which.
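A sketch of what that looks like from the unit side; the unit and fd names here are made up, but FDSTORE=/FDNAME= are the documented notification fields:

```
# mydaemon.service (hypothetical)
[Service]
Type=notify
ExecStart=/usr/local/bin/mydaemon
# allow the daemon to park up to 8 file descriptors in the manager across restarts
FileDescriptorStoreMax=8
```

The daemon hands a descriptor over by sending the state string FDSTORE=1 together with FDNAME=listener and the fd itself (sd_pid_notify_with_fds() in C), and receives it back through the usual $LISTEN_FDS / $LISTEN_FDNAMES socket-activation environment on the next start.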
So there's a new thing called systemd-analyze fdstore, and again I will do a demo. I will do it on the laptop because I want to have a nice example. It requires privileges, because if you were able to look at a service and see what file descriptors are stored, this would possibly give away too much information, so it's a privileged operation. This is an example for logind, and you can see that, well, we have all the file descriptors; they also have names, so that it's easier for the service to figure out what those file descriptors are doing, what they are for. And there's a bunch of settings related to this. FileDescriptorStorePreserve= allows file descriptors to not be cleaned up immediately by systemd when the service stops, so you can have a semi-permanent thing that is kept by systemd. This of course creates the problem that sometimes you want to get rid of the file descriptors, and, well, there is now a systemctl clean command to get rid of those file descriptors that survive the service going away. And systemd-notify has additional switches to send file descriptors with a name; I messed up the rendering here, of course this should be a double dash. Another debugging feature is systemd-analyze malloc. Again, systemd-analyze also requires privileges; this works over D-Bus, it sends a request to a service, GetMallocInfo, and, well, it requires privileges. This is just a function provided by glibc to get information about allocations, and the idea is that various services will implement this protocol and allow you to see what they are using memory for and so on. This was for the system manager; for the user manager it's a bit different. A bunch of systemd services implement the protocol now, not all. I know it can be useful. Another one is udevadm verify; again, a demo. When we added this, we found a bunch of bugs in our own, well, bugs, bug, I don't know how serious they should be considered, in our own rules, and we fixed them all, so the ones that remain are from the distro. It's things like whitespace issues, tokens that are run together; then there is NFS, which does some very strange thing, because it creates a package file that is not readable, which is a violation of the packaging guidelines; and then there are some other minor things. We should probably get some kind of rpmlint script for this, but I think it will be nice. And of course this also finds more serious issues, like syntax that seems to be doing something but is actually not useful, for example invalid option names and so on. Another thing, to make using systemd nicer, is more edit verbs. We had systemctl cat and systemctl edit on unit files for a few years, and now we have them for machinectl and networkctl. For machinectl, the configuration files are for nspawn, so if you don't use nspawn this is not useful for you. networkctl is for networkd .netdev and .network files, but also for .link files, which are used by udev, so this is a bit confusing. And another thing that has been changed... let me try this, the network... can I edit this... does it work? I wanted to show that, let me try this. So now I'm editing a unit file. This is not very... this has been around for a while, but we keep improving it. Basically you get an editor that creates an override file, so the file name is, well, without the temporary prefixes here, but to make it easier you see the
preview of the existing contents, and the cursor is placed automatically in the right place where you would edit things. So, I don't know, I add something, and I overwrite something important, and, well... Editing requires privileges, but of course dumping the file does not require privileges. And related: this is also a symlink, so I can click on it and get it opened in an editor. Okay, so cat and edit for more files. Another new feature, or group of features, is support for better booting, or rather better not booting, of a machine when the power is low. Basically the idea is that while we are in the initrd, before we have mounted file systems, before there is the potential to have file systems mounted and data lost, we do a check. So systemd-ac-power implements a check of the battery: --low checks that it's below 5%, or more precisely it checks that the system has at least one battery that is discharging and no batteries above 5%. Getting the check right took something like 25 iterations, but I think we are there, because there are batteries for which the kernel doesn't know the charge level, there are batteries which are present but not discharging, there are systems with no power supply, and so on and so on. And there is a new systemd-battery-check program that runs in the initrd. It checks the battery, and if it's below 5% it says "your battery is below 5%, plug in a charger", and if you don't do that within some very small amount of time, it shuts down the machine. We will see how it works in practice, but the idea is that this is better than booting into the system and running out of power very soon after. It can be disabled on the kernel command line. There is also a new way we handle hibernation, because the problem with hibernation is that it's easy to pick a swap partition, write the memory contents to it, and shut the machine off, but figuring out at boot which of the swap partitions to read the state from can be complicated. Also, people use swap files, and with partitions it's easier, with swap files it's even more complicated. The traditional way was to put this information on the kernel command line, but this can be out of date, and then it becomes messy. The new idea is to write the information about which swap file or swap partition was used to an EFI variable: systemd-sleep creates a HibernateLocation variable that describes the ID of the device, the partition, an offset and some additional details, and then on boot systemd will look for this variable and use it to resume from hibernation. Hopefully this will fix the problem for the people for whom the previous approach didn't work. And I think I have some more time, so the last thing, or group of things, I want to talk about is systemd-repart. This is a screenshot of systemd-repart running on some machine. systemd-repart is a partitioning tool that works this way: there is a bunch of config files that specify what partitions are expected. It goes through the config files, takes a config file, looks for the given partition; if it's there and it has the right size and so on, then nothing happens; if it's not there, it will be marked to be created; and if it's there but, for example, too small, it will be enlarged, and so on. So repart is pretty nice, and a lot of work has gone into it in this release. It has the following features. The partitions are created in an
atomic way, which means that first repart opens the block device, uses a loopback device to get access to where a partition will be, writes the contents of that partition, or more than one, and at the very end, after everything has been written, it creates the partition table entries. So either you get a partition with the file system, or you don't. It also knows how to create partitions with a file system with contents, and it "knows" in the sense that it just invokes the file system creation tools in a mode where they write files into the file system that is being created. This needs to be supported by the various file system creation tools, but the major ones do it, and this is very nice because you get atomic creation of a file system that is already populated. The new part is that systemd-repart now doesn't require privileges. Before, it used the kernel loopback device to get access to the right place in the file, but that has the problem that if you are fully unprivileged, or you are in a container, it is not available; now it has been, well, fixed, changed to not use a loopback device. In principle it supports any file system, but the ones that, for example, allow writing contents when the file system is created work better. It also has support for verity and so on. And, well, it's fast, and that's important, because the idea is that it runs on every boot; normally it doesn't do anything, but you can add additional drop-ins and it will create partitions. It supports minimization of the file system, which is important if you are creating file system images. Another new thing is that a lot of work has been put into systemd-repart and other systemd tools to support operating on an alternate root directory from the outside. Some of them did support that already, but not all, and I think now pretty much all of them do. This means that systemd-repart is used for the next iteration of mkosi. mkosi is the systemd image creation tool; it started out as a tool to test systemd in VMs but has grown into quite a useful thing. It has a declarative configuration that lists a set of packages, and because we added complexity to other things, for example to repart, we were able to make mkosi simpler. Basically mkosi had this whole understanding of partitions, which partitions to create and with which size and so on, and this has all been removed; it now just has a directory where you put configuration for repart, and repart is called to create the partitions. So mkosi creates a temporary directory, uses dnf or some other package manager (it also supports dnf5, in case you're wondering) to put files into this temporary directory, and then tells repart to take those contents and put them into partitions in an image file. This is a much different way of doing it than we did before, but I think it has nice advantages, in particular this separation of concerns, and mkosi itself is much simpler. It has declarative configuration, like everything in systemd; it operates on package names, and this also means you can specify anything that dnf will understand, and for other distributions you can use other specifications, so, I don't know, versions, package names with versions, or package names with version bounds. And if you want to add stuff to the image, the best way is to have a skeleton tree that will just be dropped into the right place. And, well, it
uses other systemd tools, so it supports read-only images and signatures and dm-verity and so on. For reproducibility, we are still not there with full reproducibility, but at least we are making logs and manifests of what is installed in the images. I mentioned dnf and dnf5, but the nice thing is that mkosi can support pretty much any distribution that is just a bunch of packages: it supports apt, pacman for Arch, dnf and zypper for RPM-based distros, and it also supports Gentoo. So basically, for example on Fedora, if you have pacman and apt installed, the binaries, you can create images of any other distribution; it just feels like native support. This is very nice. Unfortunately this whole rework has required a huge compat break between the last release, which is like a year and a half old at this point, and the upcoming version, since we are doing so many changes that we broke compatibility, and the release is still delayed. One thing is that in the previous version there was a set of stages, and it was kind of fixed: you build, install and then you test, and different people wanted different things. This has been replaced by something called profiles, where there's just a set of stages and the next stage, or next profile, can use the previous profiles. The name fits some of the use cases but maybe not the others. And like I mentioned, a lot of the heavy lifting can be moved to systemd-repart, and this means that mkosi can create images without any privileges, so it works in a container, and it's also faster. Another thing we are kind of getting rid of is that before, there was always this mismatch: some things that you want to do when you are creating an image you want to do from the outside, because for example you want to copy configuration from the host into the image that is being created, but other things you want to do inside the image, because you want to use the tools in the image, and that also means you have to install those tools into the image and then, if the image is supposed to be very small, maybe remove them afterwards. It was messy. Because we have been adding support for operating on an alternate root directory to all systemd tools, including repart, the idea is that everything you do in mkosi, all the build stages, is invoked on the host, and if they want to, they will just use the chroot-like operation to switch into the temporary file system. In general the idea is that the tools themselves implement support for operating on an alternate root directory, and this means the whole thing can be simplified, because you don't need to have anything installed in the image you are creating that isn't needed there. And that's what I have. As always, like every project, we are looking for contributors. mkosi is in Python and systemd is in C, so everybody can get involved if they want to. So, questions please, and then pass the microphone on to whoever has one. With the introduction of the Upholds dependency: there are certain services that need to point to the network target, so would you recommend them to use Upholds and add the network daemon to it, whichever one they use, systemd-networkd or NetworkManager, or would you ask them to keep using the network target? So, network.target generally does not mean that the network is functional; there's a whole wiki page that explains the difference between network.target and network-online.target and so on. I think the
And that's what I have. As always, like every project, we are looking for contributors — mkosi is in Python and systemd is in C — so everybody can get involved if they want to. So, questions please, and then pass the microphone on. With the introduction of the Upholds= dependency, there are certain services that need to point to the network target; would you recommend them to use Upholds= and add the network daemon they make use of, whether systemd-networkd or NetworkManager, or would you ask them to keep using network.target? So, network.target generally does not mean that the network is functional; there's a whole wiki page that explains the difference between network.target and network-online.target and so on. I think the answer is that this is quite complicated, because for services, when you start a service, it has been started and you know it's there, but with the network it depends on external state which you don't control, and no matter what you do on the machine you might not get the network right. It might make sense to use Upholds= to keep the network configuration daemon up, but I'm actually not sure how useful that would be, because let's say you're using NetworkManager and it crashes — you don't lose the network, right? You still have a DHCP lease and it will probably keep working for the next day or two. So the answer is: I don't know, it's complicated. Any more questions, for anyone else? All right, then I'm going to go ahead with the second one. About five years back I wrote a systemd unit to actually mount a partition, and I found out that this was not possible, that I had to use the fstab file for that. But I saw that you mentioned the list-automounts command, and I wonder if it's possible now to automatically mount partitions on boot using systemd. Sorry, the last sentence? Whether it's possible to mount partitions automatically at boot with systemd. That has always been possible, why not? Okay. Any more questions, for anyone else? Okay, well, the third question comes from me as well: with systemd doing boot loaders with systemd-boot, cron with systemd timers, networking with systemd-networkd, virtualization with nspawn, and partition management as well, now that you mention it — when can we expect complete world domination by systemd? You know, next year. Thank you. So, with the mounts, maybe we should talk — maybe you should give an example later, because I don't understand the question. I guess I tried using the normal mount command, I added it in a systemd unit, and that's how I tried doing it, but maybe that's not the proper way of doing it. Oh yes — part of the answer is that systemd had this problem where it would have a vision of which mounts should be mounted, and if reality disagreed with that, it would unmount things, making people very unhappy. We have mostly fixed that; I think it's much better than it used to be. Basically, a few years ago, if you did this mount command, chances were that systemd would actually unmount it soon after, but now it would probably just let it stay. Still, the recommended way is to use fstab. You can also create a mount unit; there is no benefit, it's just more lines. There's also a new setting, systemd.mount-extra I think, on the kernel command line, which allows you to specify an fstab-like line with source and destination separated by colons, plus options and so on, and then this will get mounted as if it was specified in fstab.
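As a rough illustration of those three options — the device and mount point are just examples, and the exact spelling of the kernel option may differ slightly between systemd versions:

    # /etc/fstab — the recommended way
    /dev/vdb1  /srv/data  ext4  defaults  0 2

    # equivalent mount unit, /etc/systemd/system/srv-data.mount
    # (the unit file name must match the mount point)
    [Mount]
    What=/dev/vdb1
    Where=/srv/data
    Type=ext4

    [Install]
    WantedBy=local-fs.target

    # or, on the kernel command line:
    systemd.mount-extra=/dev/vdb1:/srv/data:ext4:defaults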
Understood, thank you so much. Let's have a big round of applause for the talk, thank you. Folks, the talk is about to start. Hi, we have Max over here; he's going to talk about Ansible packaging in Fedora Linux and Fedora EPEL — it's not just about Ansible but also the modules. Good luck to you. As was said — can I take the handheld? There you go. So, Ansible is a configuration management tool used for managing servers, network devices and other types of infrastructure, and you can also use it to manage workstations, as this one is. Okay, so what components make up the Ansible stack in Fedora? Before, we had just one ansible package that contained many things: a core runtime, command line interface tools, and many, many modules for all different types of applications. Recently — well, I guess a couple of years ago by now — the ansible package was split into collections, and if you'd like to hear more about the origins of this I recommend you watch Kevin's talk from the Fedora 36 release party. Okay, so first we have ansible-core. This is the core playbook runtime; playbooks, as many of you probably know, are these YAML files where you define the state of your infrastructure in a simple, easy-to-understand format. ansible-core includes the runtime for playbooks, and it also includes command line interface tools such as ansible, for running ad hoc tasks; ansible-playbook, the cornerstone, for running playbooks; ansible-galaxy, which is a package manager for Ansible content, similar to pip or npm; and then a lot of less commonly known tools, like one I like called ansible-console, which is a little TUI, plus ansible-config, ansible-inventory and so on. It also contains the essential modules, such as dnf for managing packages — and now also dnf5 — and copy, for copying files to remote systems, and so on. Then we have Ansible collections. These are packaged units of Ansible content. They include roles, which are reusable YAML playbook code, and modules, which are code executed on remote systems; they are usually written in Python, or PowerShell for Windows collections — I know, I mentioned Windows at a Linux conference, what a crime — but really they can be any executable file that takes in and spits out the correct JSON blob. And then there are other types of plugins that run within the controller process, such as connection plugins: there's the one we all know and love, the SSH connection plugin, there's one for Podman containers and one for Buildah, and there are a lot of different cool types of plugins that run within the controller. So how are Ansible collections packaged upstream? Each collection has a namespace and a name — there are reasons for this, due to the way it used to be tied to GitHub repositories. This is a very popular collection, the community.general collection; as we can see here it's on Galaxy, and you can use the ansible-galaxy collection install command to install it. And then we also package Ansible collections in Fedora, which is pretty unique — we're one of, I think, the only Linux distributions that does this. So the community.general collection translates to ansible-collection-community-general. These versions are actually out of date, so I don't know what's going on with the Fedora packages site, but we can see that collections are packaged in Fedora, EPEL 9 and EPEL 8. And now we can see all these cool collections that we have packaged in Fedora — I cannot find my cursor, which is just lovely. There are plenty of different types of collections; most of them are named like this, some of them are not, but we have machine-generated Provides which are consistent across all collections. We now have 21 collections in Fedora Rawhide, 14 collections in EPEL 8 — EPEL, because RHEL itself only ships 3 or 4 collections in AppStream — and 16 in EPEL 9. When I originally put together this talk we had 9 collections in EPEL 9, which I thought was fun, but then like two weeks before this talk I decided to package a bunch more collections for an infrastructure project the Copr team was working on, so that's fun.
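Concretely, the two ways of getting a collection like community.general look like this (versions will of course vary):

    # from Ansible Galaxy, into ~/.ansible/collections
    ansible-galaxy collection install community.general

    # or as the Fedora/EPEL RPM
    sudo dnf install ansible-collection-community-general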
Then we also have the ansible community package, which is a bit of a new-fangled creation. It contains a bundle of popular Ansible collections that the Ansible community steering committee curates, and it was kind of meant as a replacement for the classic ansible package: once everything was split out, we still wanted to provide our users a simple, batteries-included experience, especially for beginners and people who don't want to muck around with a bunch of collection dependencies. So we package that in Fedora too. Now, I don't maintain all of these packages, but I just wanted to highlight some of the other Ansible-related content we have in Fedora. We have ansible-lint, which is a linter for Ansible playbooks and roles; molecule, which is a test tool for roles; and ansible-compat, a newish library that both molecule and ansible-lint use. We have ansible-bender, which is used for building container images with Ansible playbooks and Buildah — and I believe upstream is looking for additional help with maintenance, if anyone's interested. We also have ARA, a tool for recording Ansible playbook runs that makes them easier to understand and troubleshoot: it provides an Ansible callback plugin which sends your playbook data into a database, accessible via a nice Django web interface. And then there's vim-ansible, which provides syntax highlighting for Ansible playbooks, which I really like. Back to collections: we have ansible-packaging, a package that provides RPM macros and generators for packaging Ansible collections — again, you came to this conference, so you must expect to hear the word packaging a lot. So now, some of our macros, which I maintain along with Neal Gompa. We have %ansible_collection_build, which runs the ansible-galaxy collection build command to build the collection artifact tarball thingy. Then %ansible_collection_install, which is run in the %install section of the spec file and runs the ansible-galaxy collection install command to install the previously built artifact, and also sets up some other automation for later in the build process. Then we have %ansible_test_unit, which runs the ansible-test units command — that macro should have had an s at the end, my mistake — and is used for running the collection's unit tests. Ansible has three different types of tests, all orchestrated by the home-built ansible-test command: unit tests, which are what we run within RPM builds; integration tests, where you write playbooks to run your Ansible modules and check that they work correctly within a playbook; and ansible-test sanity, which is basically linters — I wish they would change that name, sanity, but oh well. And then we also have some other macros that we use during the build process. So now let's talk about generators. The ansible-packaging generators create the appropriate dependency on the version of ansible-core that a collection needs, and they can also handle dependencies between collections: for example, the awx collection might depend on the netcommon collection for some of its utility code, so it would specify that in the collection metadata, our generators would pick it up and generate the appropriate dependency, based on the machine-readable Provides that we generate.
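Put together, the skeleton of a collection spec file using those macros looks roughly like this — a sketch from memory, so treat the exact macro spellings and the omitted sections as approximate and check the ansible-packaging documentation:

    # ansible-collection-community-general.spec (abridged sketch)
    Name:           ansible-collection-community-general
    Version:        7.0.0
    Release:        1%{?dist}
    Summary:        Modules and plugins supported by the Ansible community
    License:        GPL-3.0-or-later
    BuildRequires:  ansible-packaging
    BuildArch:      noarch

    %build
    # builds the collection artifact tarball with ansible-galaxy
    %ansible_collection_build

    %install
    # installs the built artifact into the buildroot with ansible-galaxy
    %ansible_collection_install

    %check
    # runs the collection's unit tests via ansible-test
    %ansible_test_unit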
Now, this is a fun one: we're going to talk a little bit about Ansible versions. In Fedora 39 we have ansible-core 2.15 and ansible — the bundle — version 8, and each ansible version depends on a specific major version of ansible-core. You'll notice that ansible-core does not use semantic versioning — it predates that — while for ansible we decided to adopt semantic versioning. In Fedora 37 and 38 we have 2.14 and 7, and the community packages are supported for a much shorter time than ansible-core versions are, so that one is unfortunately already end of life. The release schedules for ansible and ansible-core clash a little bit with Fedora, because they tend to be released around the same time as Fedora releases, so in some releases we've been able to integrate the newest version, while in others we have not. Now a little bit about EPEL — we spend a lot of time maintaining Ansible in EPEL. ansible-core is supported by Red Hat for a small set of use cases and is available in the AppStream repo, and then in EPEL we provide ansible. So in RHEL 8.8 and 9.2 there's 2.14 and 7, and in CentOS Stream 8, which will become 8.9, they have 2.15 and 8 and do some shenanigans which are beyond the scope of this talk. CentOS Stream 9 has 2.14 and it will have 2.14 for the rest of the RHEL 9 life cycle, while CentOS Stream 8 will keep being updated. Another little wrinkle we've had for EPEL is Python versions: ansible-core is a bit picky about which Python versions it uses, so in RHEL it tends to use these alternative Python stacks, and we don't have as many things packaged for them, and they are supported for a shorter period of time. I've been able to work with members of the Python maint team to make sure that all of the macros for Python packages work with these alternative Python stacks. So now, what makes Ansible packaging in Fedora special? I think it's special — but of course I'm biased. We provide Ansible collection RPMs, which is something unique to Fedora, and we also provide the ansible bundle, so users can pick whether they want the smaller set of individual collections that they know they need, or the quite large ansible package that has everything; and they're able to mix and match, because the individual standalone collections and the ansible bundle don't conflict with each other. So maybe you want some collections that are part of the bundle, and then one collection that isn't part of the community package, and you can install that separately. I really like this, because depending on a user's preferences they can choose whichever path they prefer. Another thing we do is a lot of cleanup on the ansible package, because it's a combination of like 100 different upstream collections, and they tend to have a lot of files, test files and development scripts, so we clean these up in the Fedora package; you can see some of the code we have for that in dist-git — it's a bit extensive — and we have been able to upstream some of this work, but not all of it. Another thing is that, given Fedora's First foundation, we tend to support new Python versions before upstream does; we work closely with them and do a lot of work on that. And the other thing is that we're really involved upstream: I'm a member of the Ansible community steering committee and I help co-maintain our upstream build tools, we all submit patches to ansible-core and collections, we participate in releases and community documentation. So finally, how can you help? We're always happy to have more folks who want to maintain collections or help maintain the existing ones. I believe there's a package review hackfest tomorrow, so I'd be happy to sit down with anyone who's interested during that.
We're also always interested in help with testing Bodhi updates, especially in EPEL, and just generally quality assurance. Then there are some of the new Ansible dev tools, such as ansible-builder and ansible-navigator, which are not yet packaged — mainly because I didn't have the time and I don't personally have a use case for them — but I would like to see them in Fedora eventually. I would also be interested in getting Ansible execution environments into Fedora — which is not a good name, but execution environments are containers used for running Ansible playbooks, instead of running them on your regular system. I'd like to plug the Ansible packaging rooms on Matrix and IRC: we have a lot of different distributions involved — from the Fedora packaging community Kevin, David and I are in there, and we also have people from the CentOS configuration management SIG, people from Void Linux, Arch Linux, a couple of different distributions — and I really enjoy collaborating with them. Then we also have the Ansible community room, which is where the Ansible steering committee meets every Wednesday, and where some other general discussion happens. So I'd like to thank everyone for listening, and I'm happy to take any questions as best as I can. Questions? Oh, and there's also a link to view the slides, and I'll be uploading these to the schedule page after the talk. If you don't have any questions — thank you so much for the talk. Welcome — actually, what I use as my catchphrase is the Polish "howdy", something like it. So, well, I'm Alex, and I would like to welcome you to my presentation about SCA and update info. This is actually the first presentation in my life for which I have notes, because in most cases I'm doing what in Poland we would call a great improvisation — and I'm quite good at it — so this is quite new for me. One more remark: the slides are so basic that Microsoft could call me to do their visuals, haha. Okay, let me introduce myself. I'm an engineer, and I would like to call myself a systems engineer — I'm not trying for anything higher, you know, company politics and things like that. I work at EuroLinux, but right now a lot of my work is connected with the Polish National Centre for Research and Development — it's "centre", actually, but everyone says "center" — it's connected with the European Union, and we are doing open source project risk analysis. Before I start I would also like to thank everyone from Red Hat and Fedora for allowing me to be here; I hope that I won't disappoint you — spoiler alert, I might, a little bit, but I hope not. About the presentation: we will start with very basic concepts, because I like to do it that way, then we go to update info and what may be the poor man's SCA, then we will talk a little bit about the supply chain, and there will also be a small part about a few things that you can actually do with update info. Okay, so let's get started. So yeah, software composition analysis, which most people will call SCA: it's the analysis of software and its composition — wow, what a smart definition. The software part is very obvious — it can be a remote pilot, some kind of real-time operating system — but the composition we can think about on different levels, because for example we can think about the dependencies, we can think about the ecosystem, the documentation, the mirroring system for example, and things like that. And this is my view on SCA.
When we are making the risk analysis for an open source project, we actually do it this way. First we look into the project itself: for example, the kernel is written in C and Rust and things like that, and how is it made, does it follow some conventions, are there linters being used; we look into the code mostly, and we can do some analysis on the tests — not so much the unit tests, more the performance side. There can also be dynamic analysis, not just static: you have to run the software, and you have all types of trackers like profilers and tracers, or you can run tools like Codacy or SonarQube, which will give you a very nice grade for the project. Then there are the dependencies: when you are doing this for a whole system, a lot of the dependencies of open source projects are themselves projects with their own grading and so on, and each ecosystem manages dependencies in its own way, some better, some worse. This is where most of the SCA happens, especially the security SCA, because you have your own application and this application has a lot of dependencies — every non-trivial application has dependencies — and most of the security scanners, or whatever we name them, will check whether the dependencies are okay, but most of them won't even look into your application, which is, in my opinion, quite funny. And then you have the ecosystem. When you come to the ecosystem you have to think about things like licenses — because, for example, what if you are using a GPL version 3 component, or whatever license, and why you use that license, whether you have proprietary or semi-proprietary software and how you will manage that — and the ecosystem in many cases is not about the technical point of view: it can be about the legal side, it can be about the documentation and things like that. The more standard view on SCA comes from one paper — a very short paper; I won't provide the link, but there is a hub where you can read a lot of scientific papers and you can get it there — and it splits the risks into technical risk (integration, security and development risk), legal risk, which is mostly about the licenses, support — whether there is documentation and whether there is real support — and management, meaning version control. When you think about version control here, think not about git itself but more about the tags in git, the releases, in that case. And this is actually the funny thing: if you go to SCA on Wikipedia, they use this paper as the reference in the paragraph, and they split the technical part into development and integration and put security separately — it's also quite a popular way to do it. Okay, so this is SCA. Now we go to the next term, bill of materials, and it didn't come from software. This is, for example, the patent for some ballpoint pen: you can take the pen from your pocket and think, wow, there is the plastic housing, but you can also think that this plastic housing is ABS type, I don't know, 550, made with this and this; and when you think about the ballpoint, there is this little ball, and this ball has to be made from this and this steel. There is of course the legend that the Chinese were not able to make this ball for a very long time, because it requires quite a precise manufacturing process.
But a BOM is not everything. For example, there is a huge argument about whether NASA would be able to produce the Rocketdyne F-1 engines that were used on the Saturn V: in theory you have the blueprints, you have basically everything, but especially where there was very quick development there are people who have very specific knowledge and can go beyond the blueprint. So the BOM is in most cases very helpful, but it might not be enough to actually recreate the product. When it comes to the BOM, we of course have the software bill of materials, and it comes from the normal BOM. Think about it this way: you can say that our application needs a web server; then you can say it's running the Apache web server, which is different from the others; then you can say it's running Apache 2.4.20; and then you can say it was compiled that way, with those flags, with that configure, and so on and so on. A software bill of materials also allows us to establish relationships — it should really keep them together: the dependencies, how the build process works, and the shipping, and the shipping should include the cryptographic signing. The SBOM is required for reproducibility, and reproducibility is a very long topic of course; everyone would love to have one-to-one binary reproducibility, because it's the best — if you can get a one-to-one binary you are basically home. The people doing these reviews and the Secure Boot folks also did some of this: you need to have a one-to-one reproducible binary at the end, so your SBOM must be perfect, and to do that they actually use containers. Okay, the next topic is CVE, CVSS and CPE. It's a little bit funny, because a lot of people mix these up — "give me the CVS score" and things like that; even with the abbreviations people add or drop a word, but it's usually clear from context. So, CVE: anyone who is even a little bit interested in security knows Common Vulnerabilities and Exposures, and to put it very short, it's an identifier that says what kind of exposure it is, and it has a lot of data attached to it. Then there is CVSS, the scoring system — there is a new version in preview right now, you can go and check it — and it's part of the standard CVE data; very, very new CVEs don't have a score yet, because it might require some time to make a proper analysis. The CPE is the Common Platform Enumeration: in theory each CVE has a Common Platform Enumeration that should allow you to search whether your software is vulnerable — in theory, because it's a little bit complicated, and I will give you two examples. Both are Node.js, because Node.js, when it comes to security, is such a good example in many places. Normally you have the CVE, you have the "a" or "o" or "h" part, then you have the vendor and the product. If you are looking for Node.js you will probably look for nodejs:node.js, but you may have a different CPE, for a different product, because that product is bundled into the Node.js RPM package — or in most cases into some other package, because it's very widely needed. The other one is an erratum for c-ares: c-ares is also bundled, and you might think, oh, I don't have CVEs for this, my CPEs are okay — but no, sorry guys, it's not that simple; sometimes, if you are using something, you have to do this reverse dependency lookup, so it's a little bit more complicated than that.
And then there are duplicates, and it gets even more complex. Sorry for the small font, but the font does not matter that much. Like three days ago I was looking at the Fedora Bodhi system — Bodhi has a very nice interface, and this interface will tell you about the security updates and things like that — so there were, like, seven for Fedora 37 where CVEs were fixed, and there are actually good examples there, because we have one security update that does not list a CVE — I will tell you why that happens later. The next one, actually, is the Zenbleed one, which does have the CVE fixed, but in the internal Fedora system, where a CVE would normally show up under bugs fixed, this one didn't have that, because someone didn't click enough information into the system. And we have one CVE that was in triage — so everyone knows, yeah, there is something, but we have to assess it a little bit more. So there is a huge problem of CVEs not being found. Daniel Stenberg, the author and main developer of curl, which is probably the most used software in the world, wrote a very good article about why this is not a bad thing — that curl has more CVEs is actually expected, it's a good thing. But I can say that we are part of the problem. Why? As I said, I am working right now on risk assessment, and a project's CVE count can go down simply because you can only score what someone provides data for, and a lot of projects won't provide the data. Why? Because it's easier to just fix it. You know, systemd actually had this hours-long flame war — there was one critical, CVE-worthy vulnerability about root access, there was a big quarrel, name it that way, and before it got a CVE it was over, it was fixed, it was a simple fix. And in the case of many projects, you have to get this Common Platform Enumeration registered before you can even get the exposure information in, and getting the Common Platform Enumeration for your products — of course, in the end it is a simple email to this-and-that address, and after some time they will add it to the database, but reading through the government documentation might be a little bit challenging. So there is no reputational gain in most cases, and it's easier not to bother. Yeah, and this is a big thing, because I have worked with a lot of clients, organizations and companies, and actually, some of them don't want to update — in most cases they don't want to update the system at all; everyone is understaffed, overbooked and things like that. So if there is no CVE, they don't have to; but if there is one, they are required by law to make a fix, and depending on where you live, what the law says and what your institution does, it might be, for example, only 30 days, and you have to ship the golden images to all your systems in a few clouds, and some of them are very legacy, and you have to update them because you are required by law to actually do that. But the vendor somehow — magically, it might happen — gives you the next update, or the one after, and then there is the NDA. It happened to one company in Poland — we do "fuchy", little side jobs, and one company, not mine or anything like that, but some startup, actually hit that: they put an image in the marketplace of one of the cloud providers, and this image had a critical root-access vulnerability, and the first thing that happened was the NDA. You cannot inform your customers about it, you cannot make blog posts about how you fixed your images. And you know why?
Because these images, in most of these marketplaces, go through review before they get to the marketplace — there is an automatic review and a manual review and things like that — and it would look very bad for the companies. So yeah, it's quite a complicated topic when you think about it. The next thing I would like to say is that we can only be as good as the database we are using, and there is a very good paper, a comparative study of vulnerability reporting by software composition analysis tools — I'm sorry that I didn't put the researchers' names on the slide, but I would just butcher the surnames, so I believe it's more respectful this way. The most important thing they found is that SCA can only be as good as the database underlying the tool. They also found that a lot of tools just believe the user: there are a lot of dependencies, and the good tools actually pull them and check, because for example if you are using Python or Go or whatever and you just want to use some framework, you put it into your go modules or your Python requirements — say Flask in Python — and you declare that it is, I don't know, 0.8.8; a lot of these SCA tools won't actually resolve that, they are very dumb, they look at what you declared and just believe you, even though, for example, an older version of something might pull in a dependency that is vulnerable. Going back to Fedora, or the RPM ecosystem: when you think about it, the RPM database is a de facto software bill of materials, the database of the vulnerabilities is stored in the updateinfo files — I will say a little bit more about updateinfo later — and then it's not as trivial as it might seem to be a security scanner. And the thing I like the most is: if there is a report, there is a fix available, so it's very natural for people, for system administrators or whoever, to just use dnf with the security option. And when you think about it, that security option really delivers the power of updateinfo. Updates have categories — do you want to enhance your system; oh guys, we have a bug fix, it might be interesting but you can skip that; or you have a security update, which you might be required to install. And a security update has an impact: if this impact is low, you might not be required to update at all, but if it's critical, then, you know, alarm, you have to update as soon as possible. And there are links to the CVEs, so you can actually access them — because the new scoring system uses what is called temporal metrics and environmental metrics and things like that, so the security officer in your company can say that, well, in theory this is critical, but in practice, after our own assessment in our environment, we know this is low. So it's quite important that you can get to this information. And there are even more goodies: you can do version locking — yeah, I'm not a fan of it, but you can do that, though then you won't get the information — you might exclude some packages, and you can say you get this package from this repository and that package from that repository. And there is actually, in updateinfo, something called reboot_suggested — I have no idea, to be honest; there is the needs-restarting command, but it doesn't look into it, so in theory this kind of info is there, but I have no idea if anyone is using it.
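In day-to-day use, the commands that surface this updateinfo data look roughly like this (the CVE id below is just a placeholder):

    # list pending security updates, with severity
    dnf updateinfo list --security

    # summary of available enhancement / bugfix / security updates
    dnf updateinfo summary

    # details of one advisory or CVE, including the references
    dnf updateinfo info --cve CVE-2023-XXXX

    # apply only security updates, optionally only above a given severity
    dnf upgrade --security
    dnf upgrade --security --sec-severity=Critical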
And when you think about it — shiny and new, but not better — when it comes to the new ways of delivering software: snap refresh does not have a security option, snaps are always refreshed fully, and there was a proposal for it more than a year ago — nothing. Flatpak does not have security metadata either; there is even a dedicated website about where Flatpak is bad — I won't guarantee that it's correct and up to date, but it exists, and it's quite a good website because it's very short. AppImage is basically a container when you think about it, so good luck with updating the container — there is no "new version", there is nothing, and also "no wide open PGP data found" on most of the AppImages; like, really guys, it's not that hard to sign something. So I made another meme, but you know, there is basically no security metadata. And it's important, because most system administrators want to, and have to, apply the smallest update possible, because you are working not only with your organization but with vendors — for example an application vendor who says "this will work if libcurl is this and this", and if you update it and something happens, they go "oh sorry guys, you have to pay us money to fix that". It looks like this — sorry, it's true. Okay, so let's go to updateinfo once more; it might be a little bit bigger on the slide. It's <updates> at the top level, and inside there is the <update>; I will go into the update, so let's go. It's simple XML, and in updateinfo an update will look like this — I compacted it as much as I could. The important parts: you have the type — as I said, enhancement, security, bugfix; you have the severity; you have the references — this is an extremely important part, the references; and you have the package list. There is the collection element, but you can skip this collection, you can name it whatever you want, it doesn't matter actually. When you think about the reference type, it can be CVE, Bugzilla or other/unknown, but if you look inside the code — I don't remember the name of the library — it actually accepts anything, so if you want to have, for example, your own reference type, it's quite easy, I would say, even very easy. Then there is the severity, the importance as I said, and it can easily be mapped to CVSS, because CVSS scoring has these brackets — low, medium, high, critical. There is actually "none" as well, but if there is a known CVE with severity none, let me know, I have never seen anything like it. So yeah, you can map it, and it's simple.
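A trimmed-down update entry, reconstructed from the fields just described — the IDs, dates and package names here are purely illustrative:

    <updates>
      <update from="updates@example.org" status="stable" type="security" version="2.0">
        <id>EXAMPLE-2023-0001</id>
        <title>somepackage security update</title>
        <severity>Important</severity>
        <issued date="2023-08-01 00:00:00"/>
        <description>Fixes CVE-2023-XXXX.</description>
        <references>
          <reference href="https://www.cve.org/CVERecord?id=CVE-2023-XXXX"
                     id="CVE-2023-XXXX" type="cve" title="CVE-2023-XXXX"/>
        </references>
        <pkglist>
          <collection short="example">
            <name>Example repository</name>
            <package name="somepackage" version="1.2.3" release="1" arch="x86_64">
              <filename>somepackage-1.2.3-1.x86_64.rpm</filename>
            </package>
          </collection>
        </pkglist>
      </update>
    </updates>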
There are some changes needed, though. As I said before, the database is very important, so it requires effort from the distro developers, and there is no feature-complete standard — you have to look into the code, or of course just copy Fedora, Red Hat or whoever, and it will work. There is the problem that some source packages produce a large number of binary packages — Node.js is one example — and there is a lot of bundling. There is also the problem of the build-only packages: during compilation there are packages used just as blobs, gems, whatever — you start the compilation, for example building some Python requires some Python, so you bump the Python version during the build, and after the build it is discarded. But when you think about an SBOM, it should be included, because it's a build dependency; so if you want a full SBOM, and to drill in and do a real analysis, you have to know that. And with gems, I get that sometimes you need some kind of rubbish gem, which might even be proprietary, just for compilation, and you discard it afterwards — whether that is even fine legally, there is quite a gray area when it comes to the analysis. And this is my favorite one — a paper, "Not all dependencies are equal", an empirical study on production dependencies in npm — and here is the finding: first, out of the 100 projects chosen for the study, only 51 had production dependencies at all, and for 39 of them the production dependencies represent less than 1% of all their dependencies. It's, wow. Why am I saying this? Because when you think about the RPM database as a scanner input, the RPM database is very strict: if I install something, for example MariaDB, it will tell me exactly which dependencies come with it, and if I ask what I have on my system I will get correct information about it. The dependencies are 100% true and very explicit in most cases — as I said, there are weak dependencies and things like that — and it has been in production use, well, forever, probably since EL 6 or even 5, something like that. To sum it up: as in SCA generally, updateinfo is only as good as the underlying database. So if you take one thing away from this whole presentation, the first point is that you can treat the RPM database as an SBOM — we actually do that: we have the RPM database, plus DNF as a scanner, plus updateinfo, a full-blown SCA stack. Okay, let's talk a little bit about the supply chain. The definitions are a little bit blurry, so I will just make a few points: which components are used; what the relationships between them are — for example, is this a build or a shipping relationship, or is something needed at runtime (we have that in RPM, by the way); who committed it, so you know you get the source code; and much, much more. And the thing a lot of people don't realize: it is a must-have. You have this MoSCoW model — I know the word Moscow is not the most popular right now, but the MoSCoW model is must have, should have, could have and won't have this time — and at the moment SBOMs are a must-have: there is a presidential executive order in the US that will actually force everyone, and sorry guys, if you want to sell software in the US, which is the biggest market in the world by the way, you will have to. It's not "maybe we will implement it" — no, it's a must-have. Okay, so one of the supply chain standards is SLSA. They have a very good visualization of it, and it provides build levels. At the moment the build levels are roughly: there is level zero, where you can't say anything about how you make things; then there is provenance showing how the package was built; then it's signed and generated by a hosted build platform — whatever "hosted build platform" means; by the way, they have a very good dictionary about a lot of things in SLSA, but hosted build platform is actually not one of them — and the next one is that this platform is hardened. Well, I will say that I was looking into SLSA before version 1.0, so some time ago, and version 1.0 is a joke — it's a joke, guys, sorry. Why? Because concepts like the source requirements, hermetic builds or reproducibility, and the common requirements were discarded. It came from open source, but they made it for everyone, to the point that no one will be happy with it, in my opinion at least. The current state of SLSA also looks like an advertisement for build platforms — sorry — there are, like, four build platforms listed, and only one of them you can host yourself, and it won't get you the highest level, by the way. And some requirements, like "the build cannot be tampered with" or "signing cannot be a user-defined step" — who gets to define the signing? I'm a user of the system, I'm not the one compiling here — an angry user, maybe, in my case — but, well, for me it's a joke, sorry. The other standards, which I have actually used, are SPDX and CycloneDX — and SWID, probably I should name SWID too — and two of them are actually standards in the European Union as well, so you can use them.
There is a paper, an SBOM survey from 2021 — so that, in short, is the source of that information. Yeah, so we can use those standards, and when it comes to SPDX, it's actually available for some parts of Red Hat content right now — it's getting better, I know that — you can get it for some containers and some software, of course. Okay, the important thing I would like to note is that updateinfo is ready for SBOMs, because someone was smart: the reference, as I said, does not have to be a CVE or a Bugzilla link, it can be anything, so nothing stops updateinfo from carrying a reference to a software bill of materials. So when it comes to the RPM world, it's more than prepared, everything is ready — but we are not producing good SBOMs yet; I will tell you about that a little bit later. Okay, so what can you do with updateinfo? We are getting to the end of the presentation, shall we. One thing you can do with updateinfo is review it. I actually had in the abstract that I would like to show some new tooling we are working on, but there happened to be some changes at some vendors, or whatever we will call some companies, like the one I'm working for, so I had a lot of things to do in my life during that time — some of them pleasant, some of them not — and right now I also have a very big problem with motivation because of that, but I will provide it later, I promise. So one thing you can do is review — it's very nice, but it's an old format, not that suitable right now; a search engine will actually use it. The other thing is simple scanners. This is a very small part of one of the projects I'm doing — I cannot show it, because I'm required not to before we ship it — but you have your RPM database, with the NEVRA, which is pretty much the same information, and you can just track the CVEs in your system: you have a trigger — this is actually something we already have working in that project — and you make a simple query on updateinfo and a simple query on the system, something like rpm --query --all with a format of the name and probably some signature, because you might have different vendors with the same package names and things like that, so the crypto signature is actually the best way to identify packages. We manage a lot of Frankenstein systems — part RHEL, part Oracle Linux, part EuroLinux and things like that — so yeah, we do it that way. We also use it to avoid rebuilding images, because normally you would have to spin up, for example, a virtual machine or a container or whatever, but that takes time, and in many cases you don't have access, because the people who build the golden images for your organization may be in a different department than, for example, security — so they can just give you a very simple text file. It's especially useful for golden images, the type of images that are used across the organization, or a standard build. You can also make an errata portal — Red Hat has a very nice errata portal, and AlmaLinux has errata as well — and all of them are actually using updateinfo.
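A rough sketch of that "simple scanner" input — the query format tags here are my choice for illustration; the real query in the project is a bit richer:

    # inventory of installed packages, with the signature as a disambiguator
    rpm --query --all \
        --queryformat '%{NAME} %{VERSION}-%{RELEASE}.%{ARCH} %{SIGPGP:pgpsig}\n'

    # the matching updateinfo side: security advisories relevant to this host
    dnf updateinfo list --security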
When it comes to the future and my dreams: I will provide this errata-next-generation view of what we have, sometime — as I said, motivation is quite low at the moment. I would love to have an RPM tag with the CPE — it would be so great, because then you could just ask the RPM database, just search the CVEs for your software, and get all the information like that — but it's not really possible, because adding tags to RPM is not that simple; RPM 5, which is not used by the way, could have custom tags, RPM 4 cannot. But maybe we can use Provides, with a cpe() provide, and that should actually be good enough. And the current build systems — Koji, for example; always Koji, I know, but there are a lot of new build systems, some of them good, some worse — provide a simple SBOM: most of the systems, including Koji, record the installed packages, so when you are building an RPM you have the information about the packages installed in the buildroot, and that is the software bill of materials of the build process. It's also important because of the reverse dependencies and things like that, and so that you can have a reproducible — quite reproducible — build environment, so it's very important. And once more, we are already ahead of everyone, because we have this data, but it is not in an SBOM format that is generally accepted, so there needs to be a little bit of work done. We are working on a new build system: our current build system is used by a few companies, and the new one will be open sourced. Well, I said something inflammatory, actually, because our build system was always source-RPM based — we never used this dist-git or anything like that — and we always provide the patches, so we have customers who rebuild systems and we just give them patches, and we make that public. Well, 50 minutes — it would actually take about 50 minutes to clone; the longest part would be that you need a logo for your new system, so yeah. And this dream is actually funded: we have funding in our current grant, because we need proper SBOMs for the risk analysis, so we have to create the new build system — none of the existing ones was good enough. So, these are my future ideas, dreams. Sorry once more that I didn't show something I should have, but the world is on fire, so it was quite hard to make anything else. Okay, this is the end — any questions? I'm ready to answer any question that you have. Hi — how do I do it, assuming I'm on AlmaLinux: how do I use updateinfo? Oh, okay. So, every time you ask the repository about something, it actually looks at this URL, and at that URL YUM or DNF will read the repomd.xml, which is the repository metadata — an XML file — and updateinfo is one of the entries in that file. Then YUM or DNF will resolve it, and this whole stack will pull that updateinfo for you; it will probably decompress it, read it, parse it and things like that, all the magic, and then it will just tell you whether you have a security update on your system. The way most of the admins I know actually use it is dnf --security; you can also set the severity level that is required, and you can do dnf updateinfo and then info, so before you update you can get this information about the CVEs and things like that, and it's all very natural for organizations. Thank you. Thank you very much, thanks for coming. Awesome, thank you everyone. Since we've ended a little bit early — the next talk in this room has been cancelled, so there are other talks going on in other rooms. I'd be remiss if I didn't remind everyone: if you haven't already gotten your badge for Flock, there is a QR code in the back of the room. And we'll be picking up back here after the coffee break in a little bit, so thank you all very much. Welcome back, everyone, to the Fedora Leads and Linux Distribution Development track. We're here today with Lukas Ruzicka for a talk on automated testing in Fedora.
So, I'll hand it over to you. Hello, my name is Lukas and I work in Fedora Quality Engineering, and today I would like to give you step-by-step guidance on how to set up OpenQA. I'm not going to go over the concepts very deeply, because Adam already did that yesterday, so those who were interested basically know that OpenQA is an automated testing tool. It was originally developed by SUSE — it still is developed by SUSE — and Adam also adds some patches to it, so Fedora has some part in it now. It allows you to test various features of an operating system using a hands-on approach, as if a user were doing it: it basically creates and runs a virtual machine, loads it either from an ISO file or from a disk image, performs various actions inside the virtual machine, compares the expectations to the real state, and evaluates the outcomes. The OpenQA architecture is basically that there is a controller, which does the scheduling, the web UI and job handling, and then there is a worker, or multiple workers, that do the actual testing. You can have many workers or just one; for the local installation we are going to talk about, we will use one worker, because I think it's enough to consume a lot of our memory. So first we are going to install OpenQA. That's a fairly straightforward process, because everything is packaged in Fedora, so basically we use DNF to install the whole stack — especially the packages openqa, openqa-httpd, openqa-worker, os-autoinst, and fedora-messaging if you want to consume the Fedora messages; if you don't, you can omit the fedora-messaging package. So dnf install openqa openqa-httpd and the rest of the packages is a good start, and this will install the whole stack; you will need about two minutes to do it, which gives us 18 minutes left. Now, when everything is installed, we need to configure the httpd server. That's very simple too, because there are template configuration files in the OpenQA packages, so we just navigate to the httpd conf.d directory, copy the openqa.conf template into openqa.conf and the openqa-ssl.conf template into openqa-ssl.conf, allow httpd to make network connections in SELinux, and restart httpd. That's the first part. There were times, maybe two years back, when the SELinux cooperation was not that great, so it was recommended to switch SELinux to permissive mode; this is no longer required, so you can run in enforcing mode quite safely and without any issues. Then we need to configure the web UI. The configuration resides in the openqa.ini file, and basically we need two settings there. Under the global section we find the branding, which Fedora recommends setting to plain — the other option being SUSE, I believe; if you want the nice chameleon you can use the SUSE branding, otherwise it's totally the same, there's just the quite nice logo of the chameleon — and the download domains, which are fedoraproject.org. The authentication can have multiple modes, but for a local instance the fake authentication is good enough: it basically creates a demo user, and you will control the web UI using the demo account. Normally, on the OpenQA production instances, there is OpenID authentication, so you can use your FAS account to control the web UI if you have the rights and the permissions to do so, but for this local instance we are not going to need it.
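Condensed into commands, that first part of the setup is roughly the following — the template file names and the SELinux boolean are from memory, so double-check against the Fedora openQA wiki page:

    sudo dnf install openqa openqa-httpd openqa-worker os-autoinst fedora-messaging

    cd /etc/httpd/conf.d
    sudo cp openqa.conf.template openqa.conf
    sudo cp openqa-ssl.conf.template openqa-ssl.conf
    sudo setsebool -P httpd_can_network_connect on
    sudo systemctl restart httpd

    # /etc/openqa/openqa.ini (excerpt)
    [global]
    branding = plain
    download_domains = fedoraproject.org

    [auth]
    method = Fake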
Then we will install and configure our database: dnf install postgresql-server, initialize the database with the postgresql-setup --initdb command, and that's basically it — now OpenQA is ready to be run. You can see that there is a set of several services that you should start, and you have two options: either you enable them so they start all the time, or — because I don't want OpenQA running all the time on my computer — you start them with a script, as I usually do. I believe they should be started in this order; actually, I never tried any other order, I always followed the guidance we have on the wiki page: first you start postgresql and httpd, then openqa-gru, openqa-scheduler, openqa-websockets and the openQA web UI. This is enough for the web UI to start and to make further settings, but at this moment we are not able to do any testing yet. So let me start OpenQA for you — it's now installed; I am using a start-openqa.sh script which basically starts things in that order — and now you can see that it's unable to connect, because the server wasn't running, but now, when I hit F5... it still is not running. How come? Shall we debug that now? I don't know... oh yeah, there it is. Great, so now you can see that OpenQA is running; that's the web UI — yeah, it says http://localhost — so you go there, open it, you see the web UI, you click login, and it immediately switches to "logged in as demo", right? Now you find Manage API keys — I don't want to show you my API keys, so I'll switch to the presentation here. In Manage API keys you click Create to create the new keys, and I'll show you the files they go into in a moment. There is an expiration checkbox, or radio button, that you can check or uncheck; I didn't know about it, or didn't pay attention to it at first, and I was pretty surprised — the expiration usually is one year, so after one year OpenQA didn't want to start the tests and complained that there was no API key, and I was like, why, what's that, and then I realized the key had expired. So for normal use you can create an infinite API key that never expires, if you uncheck the expiration button. Then you edit /etc/openqa/client.conf, and under the localhost section you copy the key and the secret — the secret is the second part of the key; it's quite visible in the web UI — and that's it, we have installed and set up OpenQA. Then we start the worker: systemctl start openqa-worker@1. This will start the first worker and connect it to OpenQA.
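The client configuration then ends up looking like this (the key and secret below are made up, of course):

    # /etc/openqa/client.conf
    [localhost]
    key = 1234567890ABCDEF
    secret = FEDCBA0987654321

    # and then start the first worker
    sudo systemctl start openqa-worker@1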
Now we are ready to run the actual tests, and all of this can be done in 15 minutes if your network isn't extremely slow — which gives us 5 more minutes — and that is downloading the tests. Pardon? If you have dial-up — I don't think it's going to take you very much time installing OpenQA, but it's going to take you a long time downloading the tests; I'd say I'll be able to finish college between when you download the tests and when you start them. From my experience — well, I don't know how dial-up connections work nowadays, but a year ago I had an LTE connection back at home, and it wasn't the fastest connection in the world, and I had problems downloading the test repository, because it has, I think, 14 gigabytes. The LTE connection would have dropouts occasionally, so git would complain about them and stop working, so this was a problematic thing to download, and once I had to head to the Red Hat office to download the repository. I know you can use git clone with depth one and only download part of it, but it still is quite a lot because of the needles — and we are going to see what the needles are. So I'd say my 56k modem is not going to download it any time soon. Actually, you don't need to download the tests to work with OpenQA, but in this talk I am assuming that you do, for reasons I will tell you a little bit later. So basically you go to the Pagure repository and you download the tests into the OpenQA tests directory: you git clone the repository, you clone it as "fedora", and you change the ownership to geekotest — geekotest is the OpenQA user — so that it has the rights and permissions in those OpenQA test directories. And then we have the tests — you can see it on the right-hand side — but the tests are not loaded into OpenQA yet, so we will do that with the fifloader tool. You can see the templates and templates-updates .fif files, which give OpenQA a couple of pieces of information: about the available machines, about the available products, and about the test suites it can run. You load them with the fifloader.py application — it's a tool written by Adam, and it works wonderfully, I must say, because before we had this, the tests and the machines and the profiles had to be defined in YAML files through the web UI, and when you made the slightest mistake in a YAML file it would complain and never work. This fifloader changes the game totally, and now it's very easy — well, from the user's perspective it's very easy — because if you need to add another test you can just look at how a similar test is defined, copy the JSON section, and that's basically it. So you load it: the dash-c means clean all — or clear all — and dash-l means load the templates.fif.json; this gives you the basic Fedora tests, and if you want to load updates testing then you use the second template file, with updates. And that's it — 20 minutes, and everything is set up and ready.
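Roughly, those steps are the following — the repository URL, target path and flag spellings are as I remember them, so verify against the QA wiki page:

    cd /var/lib/openqa/share/tests
    sudo git clone https://pagure.io/fedora-qa/os-autoinst-distri-fedora.git fedora
    sudo chown -R geekotest fedora

    cd fedora
    ./fifloader.py -c -l templates.fif.json templates-updates.fif.json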
So, just some explanations. We will work with machines, test groups, images, jobs, test suites, tests and needles. A machine is a QEMU-based virtual machine — there can be different backends, but we are not using anything else, just the QEMU-based virtual machines — and you configure it in the machines section of the template file. For example, the uefi x86_64 machine is defined like this: the architecture is 64-bit, the partition table type is GPT, then the QEMU CPU, the number of CPUs, the RAM, the VGA driver and so on; UEFI is 1, which means true in Perl, and you also have the pflash code and pflash vars to define the proper QEMU virtual machine. In this machine section you don't need to define anything else, unless you need something very specific; normally you would just use one of the available machines, which is either a BIOS or a UEFI machine. A product is something like a group of tests that run in the scope of a Fedora flavor — it can be, for example, Workstation, Server or Everything — and then you put tests inside of these groups: when you run the Workstation one, it runs all the tests scheduled for Workstation and it doesn't run the tests scheduled for Server, and so on. You define this in the products section of the template file. For example, the Fedora Workstation live product is defined like this: the distribution is fedora, which is the entire thing; the flavor name is workstation-live-iso; and then you have settings, with variables telling the system that the desktop should be GNOME, that it should use the install_default_upload test to actually create the installed Fedora image, that the HDD size should be 20 gigabytes, that it should run from the live image, that the package set is default, and that the test target is the ISO. These are, however, user-made variables, so you can use them in the tests; if your tests were structured without those variables you wouldn't need to define them, but our tests are structured around them: we make differences in them according to the desktop type, so for KDE some specifics are used, and for GNOME too, and we usually have one test that can have various branches depending on what we need — some branches switch on for KDE and some other branches switch on for GNOME. This is how a profile is defined, in the profiles section, and this basically says that the product Fedora Workstation live ISO should run on a 64-bit machine which is not the UEFI one — the 64bit machine is a BIOS machine. These chunks are taken from the code, so this is exactly how you do it, and you could define your own profile telling OpenQA that your product should run on a specific machine. Then there are test suites; they define how a test or a group of tests will run, and they allow you to set the test variables that control the tests. Basically, you can use the variables defined in the machine, the variables defined in the product, and the variables defined in the test suite, and I believe the later you define a variable, the more it counts — so if you have a variable with the same name and a different value, the test suite value would override the machine value. You can override that behaviour by prepending the variable name with a plus; it got very complicated, because over time we realized we need to do things in different orders, but more or less, yes, there is an inheritance. It's also good to mention that in production the variables are pre-filled by the OpenQA scheduler and by the Fedora messages coming into it; when you run the tests locally, some of the variables are not pre-filled, so the test might break, and then you need to fill in the variables yourself, and we do it via the API on the command line — that's the best bet, I think, because you don't need to update the templates. So what this means, basically, is that there is a desktop_terminal test that runs for Fedora Workstation live on x86_64 and ppc64, and that it boots from the hard drive, the hard drive being a disk image named after the flavor and machine variables, something like disk_flavor_machine.qcow2; the flavor and machine will depend on those variables being set in the machine, for example, or in the other test that runs before. Because — let me show you:
Let me show you install_default_upload. The deploy/upload test called install_default_upload will basically install Fedora and upload the installed image to openQA, and then this test would start after it, so START_AFTER_TEST is install_default_upload, and it would use the image created by install_default_upload; and now the POSTINSTALL variable says: just take desktop_terminal and run it. There are several ways it could be done, but when there is a test that should follow the installation of the image, the POSTINSTALL variable is the cleanest way to do it, I think. A needle is how openQA recognizes what is expected. We need to tell openQA what we want it to do, so we define needles. Needles are PNG images with defined areas: I select some small portion of the PNG, and openQA will look at it and try to compare it with what it sees inside the running virtual machine. If it finds it, it will do something about it: it might click on it, or it might just check that something like that is there, and if it is, then this tiny little test will pass; if it doesn't see it, it will complain and it will fail. So you can, for example, check that there is a nice Fedora logo in the upper left corner using openQA by defining a needle with that logo. Basically, a needle is two files: there is a mistake here on the slides, it says a JPG file, but it should be a PNG file with a screenshot, plus a JSON file with the area definition and some other info. Each needle consists of two pieces of information, the area description and the list of tags, and it looks like this: you can see that the tag is evince_about_shown, which checks that the About window of the Evince application has been shown on the screen; the picture will be taken from evince_about_shown.png, the area starts at X position 445 and Y position 286, and it is 133 pixels wide and 146 pixels high, so it's almost a square; and the type of the needle is match, which means a visual comparison, and that's what works. I have also read in the documentation about OCR needles, the mythical OCR needles; I was told during Flock that they work and that somebody at SUSE actually tests Battle for Wesnoth using OCR needles, but it seems that the code needed to run them is still not merged, so you need to patch openQA, and I haven't had time to test it yet. When you want to write a test of your own, well, you don't have to, but now you understand everything and you can run the whole Fedora testing stack locally. If you do want to write a test, a test is a Perl script that defines what you do inside the virtual machine and what you want to expect. You basically define some mouse actions, some keyboard actions, some checks and evaluations, and you can also evaluate script outcomes, so you can test graphical user interfaces, but you can also test CLI commands, and all of that will work. Tests have various statuses, such as passed, failed, soft failed, running and so on. If you want to create a test, you create a Perl module and put it in the tests directory of the openQA directory we created when we cloned the repository, and you should use the libraries in the lib directory. You don't have to, but they are there, already created, already programmed, and you don't want to reinvent the wheel, so you can check what routines Fedora has already made for you and use them.
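As a tiny illustration of what I mean, inside a test's run subroutine you might lean on one of those helpers instead of scripting every keystroke yourself. The helper name and its arguments here are assumptions on my part; check lib/utils.pm for the real ones.

# sketch only: console_login and its arguments are assumed, the rest is standard testapi
console_login(user => "root", password => "weakpassword");  # log into a text console
assert_script_run "rpm -q gnome-calculator";                 # run a command, fail the test if it fails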
If you want to do a login, for example, you can log into a console as root, so you don't need to program all the typing and all the checking yourself; you simply use the console login helper, or something like that, or the login to the graphical session, and it's in the library, so you can just take it and use it in your tests. Then you should probably study the test API, which is the description of the commands, at open.qa/api/testapi. Each test should have a test header, which basically says what libraries or what other packages it uses. So this tells me that installedtest is used as the base; strict is the Perl pragma which keeps checking that your code is correct and that you don't do dirty stuff with it, because normally Perl doesn't check, for example, that when you define a variable its scope is limited to a subroutine or to the entire package; normally it doesn't check for that. Does this only support Perl, or other languages as well? The tests should be written, well, are written in Perl; the whole thing is programmed in Perl, but apparently, according to the documentation, the tests could also be written in Python. Probably. I have never tried it. Using Perl makes it dirty by default, doesn't it? Pardon? Using Perl makes it dirty by default. I don't know; I don't know if Perl would have been our first choice, but the other alternative is to go and write our own whole thing, and Perl couldn't possibly be that bad compared to having to write your own framework from scratch; Adam will know more. Yeah, just quickly: they have this crazy translation layer upstream, which I don't remember the details of, but it uses some fairly janky stuff, and you can write a test in Python. I initially turned this off because I thought it was so hideous, but some internal team at Red Hat asked me to turn it back on, so it's now on in the Fedora packages; I haven't tried it myself, but it should work. Using Perl to write tests is not that terrible, because tests tend to use the functions from these libraries, which are very simple functions and probably quite well written, so most tests just tend to be strings of: type this, assert this screen, then type this, then assert screen. It's very formalized, so you're not writing ugly Perl most of the time; there are a few cases where you write ugly Perl, but for these tests, a lot of them are fairly simple, so it's fairly readable. Yes, you can write good Perl, it is possible; it's just that the language doesn't care whether it's readable or not, so it all depends on the person reading it. The tests we're talking about should be reasonably legible. For me, for example, doing this was the first time I ever saw Perl, but I somehow got used to it now and it's okay; though the truth is that sometimes we fight over readability with Adam, because he is better at Perl than me, so he thinks it's super readable and I think maybe not. Well, if you work in the sewer every day, you get used to it eventually. So basically, if you use strict, it doesn't let you do variable definitions with the wrong scope, for example; use testapi means that you use the built-in functions; and use utils means that you use the basic Fedora library where most of the pre-programmed Fedora routines are placed. testapi is the absolute basis.
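So a typical header, as just described, looks roughly like this; the exact set of use lines depends on what the test needs.

# typical test header: what the module builds on
use base "installedtest";   # Fedora's base test class
use strict;                 # make Perl check variable scoping and similar mistakes
use testapi;                # the built-in openQA commands (assert_screen, send_key, ...)
use utils;                  # the basic Fedora library with the pre-programmed routines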
If you don't use testapi, you will not test anything, and if you don't use utils, you will need to do a lot of typing. Then the test file should have a subroutine called run; this is basically the function where everything you want to test is put, and anything else, outside the scope of the run subroutine, will only be valid inside the test package. Then you add another subroutine, test_flags, where you can define what should happen after the test finishes, what should happen when the test fails, or whether you want it to fail. For example, the fatal flag says that if this test fails, the whole test suite fails; but if you don't want that, because you have other tests inside the test suite that do not depend on this one and you don't want to make it fatal, then you can use ignore_failure, and that means it will ignore the failure and continue. You can also set the test as a milestone, which means that after the test finishes, the state of the virtual machine will be uploaded to openQA again, and the subsequent tests will start off that milestone. This is useful, for example, when you want to test an application and you don't want to keep starting it all the time, but you would like all the subsequent tests to start from a cleanly started application. So we start the application once, it's running, we upload the state to openQA, making it a milestone, and then we, for example, create a new file, and I don't care what happens next, because the next test will return to the milestone and again start with a cleanly started application. That is good, for example, when a test fails and the subsequent test would expect something that isn't there because the previous test failed; I can fight that with this rollback, and always_rollback means always return to the milestone. When you don't set any flags, then everything will be zero, I believe, or maybe... if you don't set any flags, I believe it will roll back if a previous test module fails, but it won't die, because fatal's default is zero, I think. Yeah. I always use at least one flag, so that I know what it should do. There is a test example for desktop printing, so it looks like that, but I'm going to show you another test. These are the libraries that are currently available; it's probably self-explanatory: modularity.pm holds functions that we use when testing Modularity, fedoradistribution has functions specific to Fedora, there's cockpit, bugzilla, you know, you can expect what's in there. What makes a library a library, if you want to create one? It's another Perl package that starts with the package keyword, then you give it a name, for example package desktoptools; you use base Exporter and use Exporter, and then you export the subroutines using our @EXPORT, for example start_gnome_software and install_application. This lets the subroutines in this package be used very easily in the test files without having to call anything else; without exporting them you would have to call them like desktoptools::start_gnome_software, which is not very convenient, so it's good to export those functions.
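Sketched out, such a library skeleton might look something like this; the package and subroutine names are just the examples from the talk, and the needle tag is hypothetical.

# sketch of a small shared library (names illustrative)
package desktoptools;

use strict;
use base 'Exporter';
use Exporter;
use testapi;

our @EXPORT = qw(start_gnome_software install_application);

sub start_gnome_software {
    # open the GNOME overview and launch the application by typing its name
    send_key "super";
    type_string "software", max_interval => 10;   # type slowly so the GUI keeps up
    send_key "ret";
    assert_screen "gnome_software_started";        # hypothetical needle tag
}

sub install_application {
    my ($app) = @_;
    # search for $app and click install; details omitted in this sketch
}

1;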
Now let's take a look at how we can create a calculator test, a very simple application test for a calculator. The test will be placed in the tests directory of the repository, and we can start, for example, by touching the file. Before we start to write it, we can register the test in the templates to make sure it will run in openQA, because normally, without the registration in the templates, it would not work, it would not start. It's a pity Sumantro is not here, because he's going to need this; okay, hopefully he will see the recording. So the test must be registered in the templates, and you want to give it the architecture, the product and the variables. Basically, you need to add a section to the TestSuites section that has the name of the test, calculator; you give it the profile where it should run, so this would run on the Workstation live ISO 64bit machine, so it won't run on the UEFI machines, just the BIOS machine; it will take the pre-installed image, it will run the calculator test, and it will run it from disk_%FLAVOR%_%MACHINE%.qcow2. Whenever you make a change to the template file, you need to reload it into openQA, so you run fifloader.py -c -l templates.fif.json. When we want to run this test from a pre-installed image, which is also possible, we can replace some variables and say that HDD_1 is not something generic but a specific image, workstation.qcow2, that the user login is test and that the user password is weakpassword. And now I am loading the tests using the ENTRYPOINT mechanism, which allows me to give a list of tests that I want to run, so I can start with the _graphical_wait_login test, which logs me into the system, into the GNOME session, and then it runs the calculator test; and again I need to load it using fifloader. The basic syntax of the test file is this, we have talked about it; there is one more thing I wanted to stress, which is that each test module must end with 1;, because each Perl module must return a true value, which is defined here on the line with the 1. If you don't do it, it will complain and it will not run; it's problematic, and it bit me a couple of times in the beginning, so don't forget about the 1. Then you can create a subroutine that will only be valid for the test itself; for example, you want to repeat something a couple of times and you don't want to repeat yourself, so you can define sub delete_result and say that this delete_result subroutine always presses the Escape key when called. Such a simple one, right? But then you can use delete_result instead of send_key esc, and if it's more complicated, you save some time instead of typing the same thing again and again. You could theoretically take this subroutine and place it in a library if that makes sense to you. Then we want to start the application, so in GNOME we can normally hit the Super key, and we do that with send_key super; we type the string calculator, and the max_interval of 10 makes it type a little more slowly. In Fedora you can find wrappers for typing strings, type_safely and type_very_safely, but I'm not using them here because I wanted to keep this as generic as possible, using just the test API commands. So max_interval makes the typing a little slower, so that the GUI has time to respond and the text is really what it should be, because if you type too quickly, sometimes letters get lost, the text is incorrect, and the test fails because of the typos made by the engine.
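Put together, here is a sketch of how the whole calculator module might look, including the click-and-type steps I'll walk through next; the needle tag names and the exact expressions are only illustrative, yours will be whatever needles you actually created.

# sketch of the calculator test module (needle tags and expressions are illustrative)
use base "installedtest";
use strict;
use testapi;
use utils;

sub delete_result {
    # local helper: clear the current result instead of repeating send_key everywhere
    send_key "esc";
}

sub run {
    # launch the application from the GNOME overview
    send_key "super";
    type_string "calculator", max_interval => 10;   # slow typing so no letters get lost
    send_key "ret";
    assert_screen "calculator_started";

    # 5 + 7, done with the mouse
    assert_and_click "calc_button_5";
    assert_and_click "calc_button_add";
    assert_and_click "calc_button_7";
    assert_and_click "calc_button_equals", timeout => 10;   # shorter than the 30 second default
    assert_screen "calc_result_12";
    delete_result;

    # 12 * 15, typed on the keyboard
    type_string "12*15", max_interval => 10;
    send_key "ret";
    assert_screen "calc_result_180";
    delete_result;

    # switch to keyboard mode and try a bracketed expression
    send_key "ctrl-alt-k";
    send_key "esc";
    type_string "(2+3)*4", max_interval => 10;
    send_key "ret";
    assert_screen "calc_result_20";
}

sub test_flags {
    # one possible choice: roll back to the last milestone after this test
    return { always_rollback => 1 };
}

1;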
Then we send the Enter key and we check that the calculator has started; assert_screen means: check that we see this particular thing on the screen. Then it's mostly some clicking: assert_and_click means it checks that the needle is there, that the widget we want exists, and if it's there, it clicks on it. You can add the button parameter to the command and define whether the button should be left, right or middle; if you don't do anything, it's left, so normal clicking is just assert_and_click. Also, by default openQA will wait 30 seconds for the widget to appear, so if you think it should be there in 10 seconds and you want to specifically check that it appears within 10 seconds, you can define a timeout parameter and make the timeout 10 seconds; if you leave it out, it's 30 seconds. For some tasks you might need longer, so you can make the timeout 60 seconds or 120 seconds; for some installation purposes it's maybe 400 seconds, of course, for the uploading tests, which are quite long. So basically: click on button 5, click on button add, click on button 7, click on button equals, and check that the result has been shown; that's one part of the test. Then we can multiply two numbers, but with no clicking, just using the keyboard: we type the string 12 times 15 with max_interval 10, we hit Return or Enter, and we check that the result has been shown, then we delete it again. Then we can switch to keyboard mode using the Ctrl+Alt+K combo, send the Escape key, type a complicated string with brackets that should basically be very clever about what to calculate first, you know, hit Enter and check that the result has been shown again. And that's it, we have the test. Now we have registered it, so we can start it; oh yes, I'm going to do it live. The openQA web UI will show us everything about the tests, but it can't actually be used to start the tests, or at least I don't know how, so you use the openqa-cli command to make an API call to the openQA server; basically it looks like openqa-cli api -X POST isos, and you pass the ISO variable with the ISO that you want to install. Yeah, just quickly, to make this part a bit less scary: there's a slightly higher-level runner you can use if you're okay with running on official Fedora images, called fedora-openqa, which is the same thing the official scheduler uses, and with that you can just say, hey, schedule on this image from this compose, and it will do everything for you. The tradeoff is that it can only schedule for official Fedora images and it will need to download the image, so I think Lukas doesn't use it, because it would take a long time to download the image on his system, so he's using a lower-level interface to make it faster. I am using this because I found it in the documentation the first time I was trying to run the tests, and I got used to it, of course, and then I keep those commands in a file, so I just uncomment the one I need and run it. You can pass variables using this command; some of them must be passed, like DISTRI, VERSION, FLAVOR, ARCH and BUILD, and SUBVARIANT, DESKTOP and DEVELOPMENT are good for installation tests. So if you have a pre-installed image it's not that important, but if you want the test to do the installation, it must know whether it's installing Rawhide and whether it's a development release.
That matters because the development check, for example, verifies that certain things are present during the installation, like the pre-release warning; if the variable isn't set, the test assumes no pre-release warning should be shown, but the warning is shown, and then the test fails. So these variables should be passed, and then the test is scheduled and you can see it on the All Tests page. You can also see the tests that have already run: whether they passed, which is green, soft failed, which is yellow, or failed, which is red, and you can click on the dot to see the details of the test. Then you see, for example, this is probably from, it looks like, the KDE start/stop test suite, so abrt started with some hiccup, Akregator started and finished okay, and so on. When you click on the icon of the image, you will see the screen that was recorded and you will be able to see the area that was compared; if it is green, it was found, and the candidate needles and tags tell you, for example, kmail_runs and some number, 100, which means this image from the virtual machine resembled what we expected 100 percent, so that's fine. Normally, if you don't do anything, it tolerates 4 percent by default, so as long as 96 percent is still there, the needle is taken as passed; if it's less than 96 percent, the needle is considered not found. You can of course lower the bar a little and make it 90 percent, or you could set it higher, it depends on what you need. If there is an error, it's marked in red, and there are basically two red fields for every error; mostly there is just one error per test, because then the test finishes, but there are two places, two screens: the first one shows you where the needle was expected and not found, and the next one gives you some information. That information is generic, something like "this test died" or "no candidate needle", and sometimes you can define your own strings, so you can get quite nice information about what happened in the test. And, Tim, I was wondering whether, if we actually put some effort into describing those failures a little more exactly, that could be used to train the artificial intelligence to make the predictions a little better; I don't know. If the test failed, you can restart it from the web UI; there is a restart button on the test detail page, and you can stop it while it's running by pressing the stop button, or you can just restart it while it's still running. Dealing with needles: you add a missing needle using the needle editor that is part of the web UI. You can define the area, you see the green area here, so this is the needle defined for the P button on the calculator; you can name it, or check or select the name, which is above, not part of this; and in the upper right corner you can change the match level, so you can make it less than 96 or more than 96. The nice thing is that if you don't want to deal with the needles elsewhere, you can just write the test without the needles and create them during the first run of the test by using the developer mode, which can also be switched on in the web UI; I am going to show it to you when the test runs. I am sometimes using the needle application that I sort of wrote, back when I thought that one absolutely needs a needle editor that runs offline.
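For reference, the file the needle editor writes out is just that small JSON file sitting next to the PNG. Using the Evince example from earlier, its content has roughly this shape; I'm showing it as a Perl hash purely for illustration, and the field names are from memory, so double-check against a real needle.

# rough shape of a needle's JSON file (the real file is JSON, not Perl)
my %needle = (
    area => [
        {
            xpos   => 445,
            ypos   => 286,
            width  => 133,
            height => 146,
            type   => 'match',   # plain visual comparison
            match  => 96,        # similarity threshold in percent; raise or lower it as needed
        },
    ],
    tags => [ 'evince_about_shown' ],   # the tag the test asserts on
);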
The needle application is quite good nowadays, because it can connect to a virtual machine; so when you develop a test and you first try it manually in a virtual machine, you can use the needle application to take screenshots out of the virtual machine and create the needles offline, without having to run openQA. That saves some time, because creating needles in openQA is a little bit slow, let's say, and it starts to be tedious if there are lots of needles: you need to open the needle editor, edit the needle, save it, then click to resume, it waits for some time, the test continues and accepts the needle, then it fails on the next one, and you repeat the procedure. So when it feels like that, it's good to do this while developing the test, then load everything into openQA and only fix what doesn't work. Okay, this is not that important, but you could also use another small application, a very simple editor that I wrote as a sort of exercise in Python, and it's handy because it has the test API routines preloaded, so you don't need to go through the test API documentation; you just select what you want and it gives you the snippet. It can help, but once you know the routines you don't really need it any more, because then you just type the routine. Okay, so, integration with Fedora: of course, everything is based on Fedora, everything is supported in Fedora, everything is installable from Fedora, and we use it on a daily basis, so the Fedora testing stack is up to date and should be working. We don't have many breakdowns, because our procedures don't allow merging anything that has not been reviewed, and Adam is a strict reviewer, so his hawk eye will not let any problem pass into the production repository. So if you want to test something on Fedora using openQA, it's very easy to do, and as I said, you can install it and set it up in 20 minutes. There are some sources here that you can follow; this is just for the sake of the recording, and I will upload the presentation to the schedule so you can take it. There is the openQA documentation maintained by SUSE, also the test API documentation, the Fedora openQA install guide that this talk is based upon, so when you open that link you will have a step-by-step guide for installing it, and the os-autoinst-distri-fedora repository that holds the tests and the needles, which is on Pagure. Thank you for your attention. But we still haven't used up the 90 minutes, I believe, so let me show you how the test really runs inside openQA. First I need to show you the /var/lib/openqa/factory/hdd directory where the images are placed. You can see that there are lots of images, lots of qcow2 images starting with a number and then this Workstation-live-iso_64bit.qcow2 part; remember the product, flavor and machine variables that were in the test suite definition, this is the disk for Workstation live ISO on the 64bit machine. Each test creates its own image, and the number is the number of the test to which the image belongs. While it's there, you can actually repeat the test again and again, because it has what is called an asset to start with; once you delete the underlying image, you can't restart the test any more. That happens to me on production.
I'd like to come back a couple of days later and try again, but the asset is already deleted, so it doesn't work; on the local machine, though, until you manually delete those assets, you can still repeat those tests. You can see that there is the workstation.qcow2, which will be used as the starting image; it's a pre-installed image, so I don't need to run the installation test first, because we don't want to waste time on a 15-minute Fedora ISO installation. You can see that it's owned by geekotest. Here you could make an exception: you can also make it world-readable and that will work too, but I always change the ownership to geekotest, because I think it's cleaner that way; in this directory it's not that important, though. Okay, then I go to cd openqa, where I have the run-test script; you can see that there are lots of those commands commented out, and I can select the one I need, so it's easy to start it this way, I quite like it. And I am going to start... well, I realized that it totally doesn't matter what is in the ISO variable if you use a qcow2 image for the test; yes, you could also leave it out in this case, so let's ignore the ISO variable. DISTRI is fedora, VERSION is Rawhide, FLAVOR is flock, ARCH is x86_64, BUILD is calculator_test, SUBVARIANT is workstation and DESKTOP is gnome, to be safe, but I don't think we are going to need the gnome or workstation variables, because we are not installing anything. So I run it now, and it tells me that one test, or one test suite, started, zero failed; the job ID is 4158, and it starts from zero, so you know how many tests I have run on this particular installation; the product ID is 152, that's not important. Now, when I go to All Tests, I can see the calculator test running, zero percent; the progress bar is a little slow and it changes in steps. When I click on it, I get the live stream of what the test is currently doing, so you can basically watch whether it does what you want it to do. Here you have the developer mode: by clicking on it you switch it on and you confirm that you want to control the test. Here you have "fail on mismatch" as usual, which means that if the needle doesn't match, fail; if you leave it like this, it's like having the developer mode off, so you can have it on but it doesn't affect anything. When you change it to "assert_screen timeout", then any time a needle is not found it will give you the opportunity to open the editor and create it, so this is how you can create needles while the test runs. We are not using it in our test, but sometimes you can use check_screen instead of assert_screen, which just returns a true value if the needle has been found, and that's a specific thing; you can also make the developer mode control the check needles. The problem is that if a check needle is, for example, not there on purpose and you switch this on, it will complain and force you to create the needle; once you forget that it's missing on purpose and you create it, you will get into trouble later.
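Since check_screen came up, the difference from assert_screen in a test looks roughly like this; the tag names are hypothetical.

# assert_screen fails the test if the needle never shows up within the timeout
assert_screen "login_screen", 30;

# check_screen just returns a true value when it matches, so you can branch on it
if (check_screen("optional_dialog", 10)) {
    send_key "esc";   # dismiss the dialog only if it actually appeared
}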
Now you can see that the calculator test has finished in the meantime, and it has passed, which is great. On the detail page I can see what the test actually was; sometimes, if you forget to sync the repo, you might be testing old tests. I am not developing in the openQA repository itself, because that requires everything to be typed with sudo, so I develop on the side, and sometimes I forget to push or pull, and then it does the same thing that should already have been corrected. So you can check that the test is what it should be, because you have the test script here; you can see what steps were taken and what needles were compared, so for example this one makes sure that the calculator has started, and it's 100 percent, which is great. Sometimes, when the GUI changes, it might be zero percent, and then you need to recreate the needles; then it checks for button five, again it's 100 percent, and so on and so on. You can also see the variables used to run the test; sometimes, when you have a Fedora stack running and the tests start failing in big numbers, probably something is wrong with the variables, so you can compare the production variables with your own, set them correctly, and then it works like magic and suddenly the tests start passing. You can also take a look at autoinst-log.txt, which is very important for failures, because it basically tells you everything that happens during the test run. You can see the blue line here, which is where _graphical_wait_login starts, and this is what happens: it wanted to check for the needle that must match login_screen, and at first it didn't find it, didn't find it, didn't find it, then "login_screen timed out" and it wasn't found; but because it's a check_screen, that was probably correct. Then it wanted to assert the login_screen, and it found it after approximately 10 seconds and continued. You could also use the diag routine to print your own messages into this autoinst-log.txt file, so that might be a way to make the log files more readable if you need it. You can leave comments here, of course; in production you can put the bug number here and then it shows that the bug has already been created, so on production you might see little bug symbols just below, or next to, a failed test, and you know somebody has already filed the bug. Also, what is interesting, you get a nice video of the process; yes, the video is quite fast, but it can be slowed down a bit using the Firefox menu, you can set the speed to 0.5; it's still quite fast at 0.5, but at least you can see something, and you can also stop it, though it's very difficult to find the correct place, but it can be helpful too. So I think this is it, and if you have questions, you can ask. With all that talk about needles, I'm missing either a thread or a haystack; or, to sum it up, why is it called a needle? I don't know, to be sincere; maybe it's because of the haystack, maybe because it finds small portions of a picture in a big image, maybe it's because of that. Well, sometimes, you know, there is a strategy for how you might find a needle in a haystack. Well, to carry on that analogy: usually you are looking for a needle in a haystack, the haystack is given, the needle is given, and you do the searching with whatever you use for that, a magnifying glass or a magnet. But what you're doing here is you define the needle, which is put into the haystack, and then you check whether the needle is there. It's weird terminology.
Well, the haystack, as I understand it, is the PNG file, the needle is the little portion, and both are there; it's just that the one you expect might not be there. But since you define the needle, you can make it as big as you like; it could be the entire picture. True, and then it's not a needle any more, is it; it's a hammer or something. Okay, that's a very good point, which reminds me to tell you this: the size of the area actually matters, because the bigger it is, the more problematic it will be to find, and sometimes there is a tiny pixel glitch in the image of the virtual machine, so if the needle is too big and you expect 96 percent, you will have trouble. So the best strategy is to keep the needles as small as you can; if you need to check for more, you can define more areas inside one needle, but the smaller they are, the better for you. On the other hand, I once had a case where I wanted to check whether a button was lit or not, so basically it looked the same, it would just be a slightly darker shade when off and a lighter shade when on, and the classical 96 percent couldn't cover that, because in both cases it would be above 96, so it couldn't differentiate between off and on, and I had to explicitly make it 100 percent, and only then was it able to distinguish between the states. So sometimes it's a funny game with those needles. My first comment was of course more of a joke, you know, trying to get my question in there; I totally understand what it's doing and it's very useful the way it works, it's just that sometimes you ask yourself these questions, like how did they come up with that name and why did they use that terminology, it doesn't make sense to me. Well, it doesn't make sense to me how a chameleon can look to one side and to the other side at the same time, you know. Does it support accessibility testing? Accessibility testing, like... like this, for example? Or which accessibility do you mean? The calculator test that you ran: if I do it in a high-contrast or, say, a colour-change mode, can I reuse the test? You can reuse the test, but you have to recreate the needles, so you could basically have a couple of sets of needles and you could, for example, use the calculator test with high contrast, normal contrast, large text, you know, and it would work. This is interesting, because this is exactly why needles have the tag concept: you would have the exact same test logic and you would just have three different needles which carry the same tag, which all match on the same tag. We use this a lot; we have lots of cases where different needles match on the same tag because of various conditions making the screen look different, so that's how you would do that. And just to add to Adam: if you run the calculator test and you have created the needles to support high contrast, for example, as well as normal contrast, it doesn't matter which variant actually runs, both would pass, because the needles are already there, so you don't have to tell it to use the high-contrast needles; if the session is high contrast and the needles are there alongside the normal-contrast needles, the test will pass.
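A small sketch of what Adam describes: several needle files can carry the same tag, and the test only ever refers to the tag; the file and tag names here are hypothetical.

# calculator_started-default.json and calculator_started-highcontrast.json can both contain
#   "tags": ["calculator_started"]
# so the same test line passes whichever variant happens to match:
assert_screen "calculator_started";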
Okay, I have a question about, let's say, the target audience for this. Who should be most interested in looking into this? Let's say I am a packager in Fedora and I have some graphical application in Fedora: should I try to install this and create a test, then submit it as a pull request for you? Is it targeted at me, or not? Well, I would like to say yes, but the question is, do we have space for it, do we have resources for it? I believe that if it's a single test, for example a single application being tested, we would have the resources to do it; if it's a hundred applications, maybe we don't, because the time to run all the tests is scarce. So basically, if you are a packager who develops a Fedora application that is part of the installation or heavily used in Fedora, then I think it's good to make a pull request and write a test, and we could take it into our stack and test it, right? Yeah, there are different directions someone could go with this. As Lukas says, if you want to get a test into the official repositories, before putting too much work into it, it's probably best to file an issue and we can discuss whether that's a test we would want to carry in the official instance. But you can also just stand up your own openQA instance, as Lukas has explained, and use it; you can do this kind of permanently, and I think there are cases of people doing that, and there are other projects which do something openQA-like, GNOME and Debian and so on, so that's another way you could go with it. But yes, we do have resource constraints on the official instance, so we have to be a bit selective. I have, for example, talked to some Red Hat teams about openQA and they said, oh, you know, it looks great, but it's too complicated to maintain. Well, actually, I don't think it's too complicated to maintain a local instance, because it works, it must work, it works for us all the time; the basic thing is to install it and run it, and that's it, nothing to maintain, because it's maintained by Adam. It does take you your 20 minutes to get your initial instance running, but after that it will pretty much sit there and work; you can have your pet instance sitting there, not use it for six months, and if you come back, update your system and try to run a test, it'll probably be okay. Yeah, there's not a lot of ongoing maintenance involved; it's mostly initial setup, but once you've done it once and figured out how to write a test and add it into the template, it all gets a lot less overwhelming, so it's a little bit of initial setup and then a sort of plateau. And there is one problem that can actually arise after an upgrade, and that's the Postgres server, because sometimes it gets updated and you need to update the database by running a specific command; if you don't do it, it won't start, and then openQA won't start, but that happens maybe once or twice, yeah. Okay, one more: so if I understand correctly, I guess this is most interesting to teams that could be working on some bigger projects, or to maintainers of some high-profile applications, for example LibreOffice or something, that would be interested in getting pushed into the Fedora production instance, or perhaps to some passionate maintainers that want to run their own local instance; is that correct? Or to anyone in the community who wants to help us create the tests for the Fedora stack. Thank you very much. Awesome, thank you everyone so much. That concludes today; if you haven't already gotten your Flock badge, please scan the QR code at the back of the room, and otherwise we'll pick up here tomorrow for day three. Thank you all so much.