Thanks for coming in for the State of Fedora QA. It's literally a new day for me; it's about two o'clock in India right now. I'm going to talk a little bit about numbers and what Fedora QA does, and more importantly, I'm going to talk about what's new in Fedora QA and what's coming for the community. Fedora QA has always been the team that has been the prime spot for a lot of contributors who have come to Fedora: people who started out as users and began contributing either by filing bugs, or by writing or running test cases for a package, or something like that. A little bit about myself: I'm Sumantro. I go by sumantrom on a lot of channels, mostly Fedora QA ones, and today's talk is specifically about the new things we are bringing into Fedora QA, a bit about recognition models, and a little bit about how we are planning things through the F37 cycle. If you're new here, this slide will give you some pointers on how to get started and how to go about doing some QA. Now, moving on: who are we? We are basically bug squashers, but we do a lot of other things as well. One set of things includes hosting test days, both the test days app and infrastructure, as well as hosting test days for multiple SIGs, teams, and working groups, in collaboration with almost everyone. We run a lot of release validation events; release validations are the compose-testing events that we do for Rawhide nightlies, Branched, Beta, and of course Final. These are also exactly where we find the blocker bugs, which can then block a release. Other than that, we maintain an automation system called openQA; Adam Williamson does that with Lukáš Růžička on our team.
We also help onboard a lot of update testers: the folks who go around testing package updates for the stable operating system and, specifically, package updates for the upcoming release. F37 will have both, and at that point we are going to have updates for packages which will then get tested. Those are the very specific things we do. Other than that, we are also very involved in the development of related tooling: the release validation tools around Fedora are maintained by us. On our team there is Lili, who has created a thing called Moonlight; it's an automation system, again, which tests decode machines. Some of our team members maintain the Packager Dashboard, EMT, and a lot more. These are the tools maintained by the Fedora QA team, and we actively look out for contributors, which brings me to the last point: we explicitly try to onboard new contributors, both via the mailing lists and through onboarding calls. That's the big description of who we are. The QR code, if scanned, will point you to the Fedora joining page, which you can read through if you are new, so you can understand some of these terms better. The TCMS: yes, there is an active effort to deprecate the wiki TCMS and set up a test instance. We have been experimenting with Kiwi TCMS. Those are a few of the things we have been doing. Now, having said that, for a State of Fedora QA I wanted to cover what we have achieved and what our highlights are. That brings me to the next part, which is that since F35, or more like F34, we started having these twelve-ish odd test weeks every cycle, and they primarily include the fixed test weeks, which are kernel, Fedora IoT, Fedora CoreOS, GNOME, and i18n.
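To make the update-testing workflow above concrete, here is a rough sketch of what an update tester typically does on the command line. This assumes the stock dnf and bodhi-client tooling; the update ID shown is entirely made up for illustration.

```shell
# Pull in pending packages from the updates-testing repository
sudo dnf upgrade --refresh --enablerepo=updates-testing

# After exercising the updated package, leave feedback (karma) on the
# update via the Bodhi CLI. FEDORA-2022-0123abcdef is a hypothetical ID;
# the real one is shown on the update's Bodhi page.
bodhi updates comment FEDORA-2022-0123abcdef "Basic functionality works for me" --karma 1
```

Positive karma from enough testers pushes an update toward stable; negative karma flags a regression before it reaches everyone.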
These are our fixed candidates, and beyond those we pick up Change sets as we learn about them and go around running test weeks for those too. We have also started putting on something called Fedora Rawhide test days, which I'm going to talk about a little later in the slides. In the last few releases we have hardened and expanded the release criteria: the dual-monitor release criteria and the GNOME default-apps release criteria coming in have actually helped a lot to expand coverage, and we have achieved a much better, or rather much more fine-tuned, blocker status every time we have gone about changing those. We have successfully added IoT to the test matrices, and we have gone ahead and run a bunch of CoreOS test days; if things go fine, CoreOS is going to be a new addition this time. That's something we are very proud of as a QA highlight; we have really helped our community test a lot of these things. CoreOS, however, has a fantastic automation suite that they run on kola; Dusty might be around somewhere if somebody wants to contribute to that piece of software, or rather that test-case management system. Now, when I look at our highlights, one of the things that comes to me almost instantly is the number of GNOME apps that have changed over release cycles. We have maintained, or rather worked closely with, the GNOME and Workstation working groups to make sure that these test cases are maintained actively. Some of these get a bit stale, and it takes some effort, still in progress, to maintain these test cases, keep them as up to date as possible, and flesh them out to make sure things work. One more shout-out: one of our team members, Jeff Maher, has actually done a lot of work with Framework laptops; he has made everything integrate and run on the Framework laptop very successfully. There's a talk that he is presenting.
I don't know the timings, but if you happen to be interested in Framework, that's a talk you might want to see. That's the highlight reel of what we have done over the last few releases. Now, what a State of Fedora talk usually comes down to is: okay, all of this is great, but what's next? One of the things that stands out to us is something we have been trying to do for the last two releases: we want all the community members to test a specific compose during Beta and Final in a more holistic way, a way in which folks across the globe can participate in a single event, and that's what we are going to call a release validation event. The QR code will take you to a ticket on the Fedora QA Pagure which defines what we are trying to do with these events. It's not actually implemented yet, but we are planning to implement it from this release cycle, which will have a Beta validation event and a Final validation event. These events are mostly going to help our community grow and sustain itself. The other initiative we are pushing forward, and one I am more interested in leading, is a test case sandbox. The way I want to put this is: a lot of new QA members, when they join Fedora QA, are usually either asked to test updates or asked to do release validation, and release validation for somebody who is new to Linux is typically very hard, because it involves understanding all the test cases for the particular compose they are trying to test and then running through everything. That becomes a bit of a challenge for newcomers. So there is a path that I have put together. The mechanism is simple: we go around writing as many package test cases as possible.
The QR code currently points to a HackMD which has a bunch of test cases for packages. If you click on those test cases, you, or a new contributor, will find a set of instructions which you can then run for those packages, and you can give karma to those packages if you are using the latest version. Over time we want to ensure that more test cases are added to this sandbox, so that whenever a new community member is onboarded they can just go look at all the test cases, keep posting karma, and test some basic functionality of whatever package they are interested in. Now, how to write test cases? That's one thing I am more interested in right now. Even as we look for a new TCMS and so on, we still want to make sure there are ways to let people know that there are a bunch of test cases that need to be written, and certain things that people are more interested to know about. That's the third effort we want to lead from the QA team: we want to take the general onboarding call that we already do today and expand it into three specific types of calls. The first is a collaboration with the Fedora Join SIG, which is going to be a classroom for things like Bugzilla and openQA and so on: very niche topics, no more than 30 minutes each, covering a very specific, curious audience. The second is that we want to make sure every contributor, as they go along consuming the test cases, the sandbox cases as I call them, is at some point able to write such a test case themselves and send it off to the test list for verification, or rather to karma these packages and add these test cases to the sandbox, at which point any new contributor can find those packages and start QA-ing.
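To give a flavor of what a sandbox-style package test case might look like, here is a minimal sketch for the Podman package. This is not an official Fedora QA test case; the image name and checks are illustrative assumptions only.

```shell
#!/bin/sh
# Hypothetical sandbox-style smoke test for the podman package.
# Each numbered step mirrors an instruction a new contributor would follow.
set -e

# 1. The package is installed and the binary runs
podman --version

# 2. A container image can be pulled from the Fedora registry
podman pull registry.fedoraproject.org/fedora-minimal:latest

# 3. A container starts and its userland is reachable
podman run --rm registry.fedoraproject.org/fedora-minimal:latest \
    cat /etc/os-release | grep -q '^NAME=' && echo "PASS"
```

If every step succeeds, the tester would leave positive karma on the corresponding update; a failure at any step is exactly the kind of bug these sandbox cases are meant to surface.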
The way we want to do it is to make sure there's a specialized onboarding that happens, and in that case I would run through multiple sets of packages, some system-level and some that are tools like, for example, Toolbox and Podman, and sketch out easy-ish test cases for anybody to write; then we can either put those in as part of a test day or as part of a package, and that's what we are looking at. These are the three new initiatives, the three new moves, that we are planning as part of the F37 through F38 and F39 cycles. If you are somebody who is interested, and you think this is something you want to do, there are things you can start doing today which would take you one step closer to participating in Fedora QA. The first is that we are actively telling our contributors to go back to the Join SIG and help new contributors join Fedora QA. The Join SIG is a very nice platform for anybody to reach out and get guidance and help, and it's also important that, if you are already somebody in QA, you go ahead and help Join SIG members, or anybody you find in the Join SIG who is interested in joining Fedora QA; your experience of Fedora QA would help them go a long way. That's one thing: we want to give back as much as we can, and the way to start doing that is by reaching out to the Fedora Join SIG, the mailing list, and IRC, and seeing what can be done. Until now we have been very successful at doing that; we have been able to take new contributors from the Join SIG and convert them into long-term Fedora QA contributors, and that has really worked well for us. The other thing we have been focusing on is this thing called test days. Test days are one-day events that we usually run; currently we have about 12 to 14-ish test days every six-month release cycle. They are
extremely beginner-friendly, and the most important part of test days is not just participating but also hosting one. You don't have to be a Fedora QA member to host a test day: in the last release, the Wallpaper team filed a Fedora QA ticket saying they wanted to do a test day, and when that test day happened, a few bugs were filed and they got fixed. It's a very nice mechanism that we have put in place. If you are somebody who wants to work closely with us, the way to go is to just file a ticket and host your own test day; that's one of the biggest opportunities there is, beyond just participating in a test day. Currently, if you look at the test day tracker, there are new test day tickets being filed by folks, mostly known folks: there are tickets for crypto-policy changes, and Justin Forbes files tickets for kernel test weeks. That's the state of affairs when it comes to participating in test days, both hosting and taking part. The last part, which is a bit tough for people to get into, is release validation testing and test case writing. The reason release validation testing is usually heavy on new people is that we have a lot of test cases, and they are matched to a lot of criteria. If you are really new, some of these test cases might look too heavy; some of them have a criterion attached, so if one fails it becomes a blocker and you have to file a blocker bug, for which you need some level of Bugzilla knowledge. That's one thing we want to cater to this time: we want the onboarding calls to address these specific use cases so that we can go ahead and help. We do not currently have specific onboarding calls for writing test cases, or calls that cover just Bugzilla, so we would probably want classrooms where we talk about Bugzilla, ticket filing, and how to file good
bugs: the kind of sessions that encourage contributors to take part in more activities. Moving on: a lot of people over the last few releases have complained, or rather shown explicit interest, in getting some form of recognition, either badges or swag. The way it has usually worked is that a badge gets designed, that gets pushed, and then it gets awarded manually, which is a lot of work. Currently, though, Badges is broken, so a lot of people might not have gotten badges for testing kernels or editing wikis and things like that. From here on, there are a couple of things we plan to introduce from this cycle onwards, or at least attempt to. One is proposing a special badge for authoring test cases, at five test cases, ten, 15, 20, 25, 50 and so on, and the same for test days participated in, at one test day, three, five, 12, 15, 16 and so on. And if you are somebody who is hosting a classroom where you are teaching, say, openQA, or Bugzilla, or a debugging tool like dogtail for GNOME, I think there should be a badge for that too. So that's in the works. Since we are talking badges: today I was talking to Vipul, and Vipul has opened up a discussion ticket. If you are somebody who is interested in the development of badges and you want to see all these badges come through, make sure you go visit that thread and put in your inputs; that should make a strong case for us to implement some of these as fast as we can. Having said that, as a State of Fedora QA, there is another way we could potentially reward contributors, and that's by giving out some swag. At this point the QA team has not tried to approach Mindshare for swag, but I think there is room for us to go ahead and ask for some; of course, the help of the design team
and Mindshare would actually get some specific swag shipped to people who have participated in more than, say, 10 or 5 test days in the last release cycle or the coming one. However, at this point I would be open to more suggestions as to what you, or the community, think would be a good rewarding or recognition mechanism, because as we go on and the community grows... QA is largely based on trust, and as the trust grows we have to make sure we don't burn out our contributors, which also includes being very respectful of how contributors spend their time in QA. So we would love to explore recognition opportunities and to work with anybody who has ideas about what to give out as potential recognition models. So that's, in a nutshell, what the state of Fedora QA looks like. Let's make Fedora better, and yeah, that's one of my favorite quotes of all time: all code is guilty until proven innocent. I hope you have some idea of what we are going to do as Fedora QA, and I hope you enjoyed the presentation. For the next few minutes I will be taking questions, if anybody has any, and I will stop the slides. Okay, there's a question, which, okay, I should probably start reading: do you plan to cover Silverblue as well in the test weeks? So yes, we had Silverblue test weeks until F34, and I would want to see Kinoite and Silverblue test days. I have been working with Debarshi recently, very much so, and I would love to see more test cases come out, and I would love to see rpm-ostree getting tested in more depth as we go on; rpm-ostree is the same thing CoreOS is built on and the same thing IoT is built on, so we would love to see that covered. So yes: talk to me or file a ticket, and I would love to work with you to cover test days. Do you see room for unit testing in Fedora in the future? Okay, so these test days, or rather the testing that I
am talking about, these tests are mostly from the user point of view; they are regression-style, functional, end-to-end. We do not cater to unit testing, if that answers the question. But yes, if you wanted to, you could still find some project and start writing some unit test cases to build up knowledge; what I'm describing, though, is user-level testing, not unit testing. Next question: how can we help upstream projects do more testing, so that bugs can be caught before they land in Fedora? Okay, that's a nice question. Here's what we do: the whole point of having Rawhide test days going forward is basically to have these things tested during Rawhide. Go ahead and look at our tracker; I actually plan to post a test week on Podman and Toolbox, stuff like this, as part of Rawhide, where we would be taking the latest and greatest of whatever there is on the upstream side of things and making sure it gets tested before it lands. But if you are specifically talking about upstream projects more broadly, I think increasing QA mindshare in multiple projects is a very, very nice way to do that. I know Adam and Kamil file a lot of bugs in GNOME upstream, and I know we have been extremely successful with that before; having mindshare in upstream projects helps a lot, and that is something we can work on, working closely with the SIGs, the working groups, or in fact entire projects. In this case, I tried a lot with Rust: we had Rust test days before, a lot of Rust toolchain and Rust plugin ones. I ran a bunch of Java test days as well, back in the day when Jiri was the maintainer for Java; I don't know if he still maintains it, but we ran a lot of test days back then to make sure bugs were caught early. I think that answers it.
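For anyone curious what "writing some unit test cases to build up knowledge" can look like in practice, here is a minimal, self-contained sketch using Python's standard unittest module. The slugify function is purely a made-up example, not from any Fedora project.

```python
import unittest


def slugify(text):
    """Made-up example function: lowercase a title and dash-join its words."""
    return "-".join(text.strip().lower().split())


class SlugifyTest(unittest.TestCase):
    # Each test method checks one small, well-defined behavior,
    # which is the core habit unit testing teaches.
    def test_basic(self):
        self.assertEqual(slugify("Fedora QA"), "fedora-qa")

    def test_surrounding_whitespace_is_ignored(self):
        self.assertEqual(slugify("  Test Days  "), "test-days")


if __name__ == "__main__":
    unittest.main()
```

Unlike the user-level, end-to-end testing discussed in the talk, tests like these exercise a single function in isolation, which is why they live with the upstream project rather than in Fedora QA's matrices.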