Over the past years we've been working on goals, and part of Akademy has become to look at what has been happening around the goals that we voted on. And that's what we're going to do now. We're going to start with Nate, who will talk about the automation and systematization goal. So please welcome Nate. Hello everybody, welcome to the panel. So today I'm going to talk about the automation and systematization goal. This is a goal that was chosen last year, so this is going to be kind of a mid-cycle update. So let's get into it. First, let's start with a little bit of background about KDE and why the structure of KDE makes this goal important. All of us in KDE are aware that KDE is a multi-generational organization, and KDE's contributors have a defined life cycle with different stages that we often move through. At the beginning we have people who start out as students or young hackers in university. People in this group are marked by having lots of time, lots of passion, and lots and lots of contributions to KDE. Then time marches on, as it always does. People become young single professionals and their KDE time becomes hobby time. At this point people have jobs, jobs take up more time, and as a result KDE contributions tend to fall off a little bit. Then time continues its vicious cycle and people become professionals established in their careers. They have families, and KDE time often drops off quite a bit. And then finally, if all goes well, people retire, they become financially successful, and they have time for KDE again. One thing you can see with all of these different groups that people can fall into is that the amount of time that we have for KDE changes. That means there's a lot of turnover in KDE. One thing we're really good at is making sure that people cycle in and out and learn from each other, but it's really important that we keep the knowledge that people bring into KDE; it must stay in KDE. People bring knowledge all the time.
They work on cool stuff. Some of that knowledge gets passed on to other people, and then we learn from each other; I think all of us have had that experience. Some of it becomes embedded in the technical processes that we work on and contribute to. And unfortunately, some of that knowledge just gets lost when people leave KDE, and that's the thing we really want to try to avoid. That's the thrust of this goal: how can we minimize that knowledge leakage when people inevitably leave KDE, either temporarily, because they've moved on to a new phase of their life, or permanently, for whatever reason. Which is fine, because people come and go, but we want to make sure that their knowledge stays within KDE. There are many different types of knowledge that get lost, so let's go over them a little bit. The first type of leaky knowledge is processes that are done by hand and generally not documented. These are things we really want to avoid. The next is when people write personal tools for themselves to take care of certain things, but don't share them publicly. When those people go away, the personal tools are essentially lost as well. Next we have public tools, which is better. Public tools are better than private tools, but public tools have to be documented, or else nobody knows how to use them when people go away, and then they get rewritten, because it's easier to rewrite than to understand. Also, these public tools are sometimes not run automatically for periodic processes, and that's important to make sure that people retain knowledge of how to use them. Second to last, we have knowledge that is gained alone and not shared. This happens when we learn something but don't talk to other people about it. Talking to other people is really important. And finally, documentation has to be kept up to date. When it's not kept up to date, it's not useful.
All of this leads to the very familiar feeling of "if I stop doing this, it won't get done," and then you feel like you have to keep doing it, and that if you don't, everything you're working on will end up as an ancient ruin like this. That's not a good thing; we want to avoid that feeling. The basic problem here is that working alone sucks, because you end up doing the same work that other people have done before. You end up fixing the bugs that other people have fixed in the past. You end up getting your merge request nitpicked to death with style comments, because people have different opinions on what should or shouldn't be there. You end up talking to users about the exact same problems that you fixed, over and over and over again. You end up triaging the same bugs over and over again. And eventually, when you decide to go on vacation, nobody takes over what you were working on, so it eventually gets dropped on the floor. These things are very unpleasant. They tend to lead to people burning out, leaving KDE, and not having an enjoyable time, and we want those things not to happen. So that you don't turn into this guy and destroy your computer, because then you really can't work on KDE stuff. The solution is to externalize your knowledge: get it out of your own head. It's really important for the thrust of this goal that we script our tasks, and that those scripts, especially the ones for periodic tasks, get run automatically rather than manually. They need to be documented so other people can run them too, so that if one person is gone, another person can take over without missing a beat. We want to make sure that if people are doing similar tasks, we consolidate the tooling that they're using, so that we don't have each person running personal scripts that only work for them and their process while another person does a different thing. Not great. We need to collaborate on that.
We need to make sure that we're keeping up to date on test cases, because it sucks to fix a bug that's been fixed over and over again. I think most of us have had this experience; it's not fun. It's easier to write a test case, and then it won't happen again. We want to make sure that our code is well commented. We want the comments to be good: we want them to explain the why, not the what. The what is usually pretty obvious; the why often is not. Git history, I will mention real fast, is not a substitute for code comments, because you don't see the git history at the moment when you are reading the code. It depends on your client, right? It depends on your tools. If you're using Kate, which all of you should, then Kate has an amazing plugin that can show you the very last commit that touched a particular line. But all of that is a more indirect process than just seeing a comment right there in the code that explains what's going on. The comment can even reference the git history, and you can go there for extra information, but it is not a substitute. Otherwise some well-meaning do-gooder who does not have an editor set up the way yours is will look at this code and say, "why did somebody write it this way? I'll just rewrite it." And then a whole rabbit hole gets gone down, and it wastes everybody's time. We should also be making sure that stylistic stuff is handled with automated checks and with CI, so that we're not endlessly arguing over whether there should be semicolons at the end of lines in QML JavaScript code. This is a waste of everybody's time. It doesn't matter. Yes, no, who cares; let's just standardize it. We should have bots triage bugs as much as possible, because bug triage takes forever and is not a fun thing to do. A lot of this stuff can be automated by bots. And we also have to make sure that our documentation remains up to date because we're actually using it. That's the best way to make sure it happens.
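To make the "let CI settle style arguments" idea concrete, here is a toy checker, a sketch only: it is not KDE's actual tooling (KDE uses tools like clang-format for C++), and the rule it enforces is just the semicolon example from the talk.

```python
# Toy style checker: mechanically enforce one semicolon convention for
# simple JS/QML statement lines, so nobody argues about it in review.
# This is an illustration, not a real linter.

def check_semicolon_style(source: str, require_semicolons: bool = False) -> list[str]:
    """Return a list of style complaints for simple statement lines."""
    complaints = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        stripped = line.strip()
        # Skip blank lines, comments, and lines ending in block delimiters.
        if not stripped or stripped.startswith("//") or stripped[-1] in "{}":
            continue
        has_semi = stripped.endswith(";")
        if require_semicolons and not has_semi:
            complaints.append(f"line {lineno}: missing trailing ';'")
        elif not require_semicolons and has_semi:
            complaints.append(f"line {lineno}: drop trailing ';'")
    return complaints

sample = "var x = 1;\nvar y = 2\n"
print(check_semicolon_style(sample, require_semicolons=True))
```

In a CI job, a non-empty complaint list would simply fail the pipeline; which convention is chosen matters less than the fact that a machine, not a reviewer, enforces it.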
If we're not reading our documentation, we won't notice problems and we won't see that it needs to be fixed. So, when it comes to what's happened over the last year: a lot of really cool things have gotten done. We've had 12 months; that's pretty good. I want to go over real fast some of the things we've managed to accomplish. We've added tons more automated tests everywhere. Personally, I'm involved in the Plasma project, and I've seen a lot there. There are a number of apps where people have added helpful automated tests. This is a really useful thing. We've gotten a whole new testing framework using a system called Selenium that allows us to do user interface testing. Selenium is really cool because it also goes through the accessibility API. So in order to make it work in the first place, you have to have adequate accessibility support in your software. This gets at two goals at once: at the same time that you improve accessibility, you make your software more testable, and when you do test your software, you're making sure that the accessibility code is being exercised and doesn't bit-rot over time, because if it does, you'll notice it. And that's really great. For the kdesrc-build script that many of us use for compiling software, we now have a dependency regeneration tool that automatically makes sure that the dependencies for each KDE repo are up to date. That's really great. We have tooling for updating apps on the Microsoft Store, which has been written over the last year; that's some really amazing stuff. We have an increasingly large set of changes to make tests mandatory to pass, so that you can't press the merge button if the tests are failing. This is excellent. We're not 100% there yet, but anything is better than the 0% we had a couple of years ago. We have a Bugzilla bot now, and the Bugzilla bot takes care of various bug triage tasks.
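The point about Selenium going through the accessibility API can be illustrated with a toy model. This is not the real selenium-webdriver-at-spi API; it is a sketch showing why a widget without an accessible name is invisible both to a screen reader and to a UI test.

```python
# Toy model of an accessibility tree: a UI test (like a screen reader)
# can only address widgets that expose an accessible name.

def find_by_accessible_name(widget: dict, name: str):
    """Depth-first search of a widget tree for an accessible name."""
    if widget.get("accessibleName") == name:
        return widget
    for child in widget.get("children", []):
        found = find_by_accessible_name(child, name)
        if found is not None:
            return found
    return None

ui = {
    "accessibleName": "MainWindow",
    "children": [
        {"accessibleName": "OK", "role": "button"},
        {"role": "button"},  # unnamed: neither a test nor a screen reader can find it
    ],
}

assert find_by_accessible_name(ui, "OK")["role"] == "button"
assert find_by_accessible_name(ui, "Cancel") is None
```

So writing the test forces you to name the widget, and running the test keeps the name from regressing: the two-goals-at-once effect Nate describes.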
Simple things at the moment, saying things like, "hey, you're using a version of the software that's too old; report it to your distro." Things like that. We have updated a ton of outdated documentation over time to make it useful, so that people can actually start using it. We also have continuous integration jobs to build Flatpak bundles for many apps, which makes them easier to test and also allows us to see when that process breaks. So now it doesn't break as often, which is great. We have CI jobs to enforce code formatting in C++ in some projects. This is something that we have in some repos and not all repos, but it's really cool stuff, and it has saved us from a ton of arguing over code style in merge requests, which saves everybody's time for more useful things. We also have a CI job to validate JSON files, now that everything has been ported to JSON, which is used to auto-generate desktop files. So now you don't have the experience of accidentally breaking your JSON file right after you hit the merge button, which is no fun and wastes everybody's time. You can see a theme here, which is: let's not waste everybody's time. Finally, we also have a hook script to prevent you from changing translated text in Git. Now that the translations live in Git repos, we were seeing a bunch of people saying, "ooh, this is great, this means I can now do translation work in Git." Can't do it. So now we have a thing that tells you, so that we don't have people explaining that over and over again, which was no fun. If any of this sounds interesting, there are many ways that you folks can help. One of the biggest ones is to write more Selenium UI tests for apps that have the framework set up. This directly benefits multiple goals; as I mentioned earlier, it also helps to test the accessibility stuff. And if your app does not currently have Selenium set up, set it up. It's really cool. There is a wiki page that explains how to do that.
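The JSON-validation CI job mentioned above is easy to picture with a minimal sketch. The required key and schema here are invented for illustration; KDE's real metadata format differs.

```python
# Sketch of a CI-style metadata check: fail early if the JSON doesn't
# parse or is missing keys the desktop-file generator needs.
# "KPlugin" is used as a stand-in for whatever the real schema requires.
import json

REQUIRED_KEYS = {"KPlugin"}  # hypothetical required top-level key

def validate_metadata(text: str) -> list[str]:
    """Return a list of problems; an empty list means the file passes."""
    try:
        data = json.loads(text)
    except json.JSONDecodeError as e:
        return [f"not valid JSON: {e.msg} (line {e.lineno})"]
    return [f"missing required key: {key}" for key in REQUIRED_KEYS - set(data)]

print(validate_metadata('{"KPlugin": {}}'))  # []
print(validate_metadata('{"KPlugin": {},}'))  # trailing comma triggers a parse error
```

Run on every merge request, a check like this is exactly what prevents the "broke the JSON right after pressing merge" experience.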
Some of this is linked to in the goal's wiki page, which I'll get to at the very end. Another thing you can do, which in some cases is low-hanging fruit, is to make tests mandatory to pass before merging. If your project is in the fortunate state that all of its automated tests already pass, fantastic: immediately go turn on the setting that says they have to keep passing, so they won't just start failing at some point in the future, because that will happen. We also want to make sure that we adopt the code formatting stuff much more widely, so that we don't have as much arguing in merge requests about that. That's another small thing that is relatively easy to do from a technical standpoint. From a sociological standpoint it's harder, but you can say "Nate told you to just do it," and then people will yell at me and not you, and that'll be easier. We have this kdesrc-build dependency regeneration script that I mentioned earlier. That's something that could be automated so that it runs periodically, and then Nicholas over there doesn't need to manually run it once a week, which I assume is a waste of his time. We also have the Bugzilla bot. The Bugzilla bot is written in Ruby; it's rather approachable. We can make it smarter, so that it's doing more of our work for us and we have less bug triage to do. Basically, be lazier. I want all of you to go out and be much lazier, so you have less busy work to do. Because in the process of being lazier, you're also helping to take the amazing knowledge that's in all of your heads and put it into KDE, where everybody can benefit from it, and it can even benefit you if you happen to leave and then come back later, because all of that won't be lost. There are also some even bigger ideas that I've got for this goal that have been worked on a little bit here and there, but really need the helping hand of technical experts to make this stuff possible. Basically, everybody I look at in this room is much smarter than me.
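To give a feel for the kind of rule the Bugzilla bot applies (the real bot is written in Ruby; the version numbers and reply text here are invented), a triage rule can be as simple as comparing a reported version against the oldest supported one.

```python
# Hypothetical triage rule in the spirit of the Bugzilla bot:
# auto-reply to reports filed against versions that are no longer supported.

OLDEST_SUPPORTED = (5, 27)  # invented cutoff for illustration

def triage(reported_version):
    """Return a canned reply for unsupported versions, or None for a human."""
    major, minor = (int(x) for x in reported_version.split(".")[:2])
    if (major, minor) < OLDEST_SUPPORTED:
        return ("This version is no longer supported; please update, or report "
                "the packaging issue to your distribution, and check whether "
                "the bug still occurs.")
    return None  # recent enough: leave the report for a human triager

assert triage("5.24.7") is not None
assert triage("5.27.5") is None
```

Each rule like this removes one class of repetitive replies from human triagers, which is the "be lazier" point in practice.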
So I'm looking at a whole room full of technical experts. I think any of you folks, if you want to work on any of these things, would help the goal in a fantastic way. We've got tasks like consolidating release tooling. I think it's not lost on anybody that we have many different release vehicles: we have Gear, we have Frameworks, we have Plasma, and we have individual things with extra release tooling. If there were a way for us to consolidate our release tooling so that it can be run automatically and works for all the different release vehicles, that would be fantastic, because it would make it much easier for people to be release managers. Not so much of the work would have to be borne by one particular person, so that that person feels stressed out if they happen to be on vacation when release day arrives, etc. There's also this other moonshot idea of using AI to triage bug reports. People keep asking, "can you integrate ChatGPT and KDE?" And this is how we do it, in my opinion: we have a robot triage bug reports, because this is something that robots can potentially be good at and that none of us likes to do. So let's do that, if possible. The next thing is something that also helps with onboarding. If we could have kdesrc-build automatically install the third-party packages needed to build KDE stuff, instead of making people go and do that themselves, that would be a huge help. It would cut down on an enormous amount of common chatter that people end up having to handle. In the same vein as AI to triage bug reports, we could have a chatbot answer common help questions: things like NVIDIA drivers, and "why does this update not work in Discover, because my distro's update policy is completely broken," things like that. I'm sure all of us are very tired of answering these types of questions. I know I am, and if there's any way we can have a system do that, that would be much better.
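The "automatically install third-party dependencies" idea boils down to mapping generic dependency names to distro packages and invoking the right package manager. Everything in this sketch, package names, manager table, is invented for illustration; it is not kdesrc-build's actual design.

```python
# Hypothetical sketch: translate missing build dependencies into a
# distro package-manager command, instead of making newcomers hunt
# down package names themselves.

PACKAGE_MANAGERS = {
    "fedora": "sudo dnf install -y",
    "debian": "sudo apt install -y",
}

# Invented mapping from generic dependency names to per-distro packages.
PACKAGE_NAMES = {
    "fedora": {"qt6-base-devel": "qt6-qtbase-devel"},
    "debian": {"qt6-base-devel": "qt6-base-dev"},
}

def install_command(distro: str, missing: list[str]) -> str:
    """Build (but do not run) the install command for the missing deps."""
    pkgs = [PACKAGE_NAMES[distro].get(dep, dep) for dep in missing]
    return f"{PACKAGE_MANAGERS[distro]} {' '.join(pkgs)}"

print(install_command("debian", ["qt6-base-devel"]))
# sudo apt install -y qt6-base-dev
```

The hard part in reality is maintaining the name mapping per distro, which is exactly the kind of knowledge this goal wants captured in a shared tool rather than in people's heads.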
There are also ideas for how we could make our icon design pipeline much easier. It's a very manual process right now, and it relies on information that is stuck in the heads of several people, some of whom I see in this room. I think if there were a way to make this a more programmatic process, by say having icons get generated by combining symbols together in a code pipeline rather than making everything manually in Inkscape, that would be fantastic. We could automatically generate AppStream release notes from commit message tags and from GitLab tags. We have all the plumbing needed to make this work. We even have support in AppStream itself, which recently gained support for fetching remote release notes, which was the big blocker last time; it's just a matter of wiring it up. Then we wouldn't have to spend so much time manually writing release notes for every single release. This could be a big benefit, I think. Then there's also that This Week in KDE blog that some guy named Nate writes, which for some strange reason is not on KDE infrastructure yet. So maybe he could finally get off his butt and do something about that; I think that would be good for this goal as well. If any of this sounds interesting, you can get involved in a couple of ways. I'm the goal champion, so you can always contact me: I'm nate@kde.org, and I'm around all the time. I think you probably know how to contact me. Otherwise, there is the kde.org/goals page where you can find information about all the goals, including the automation goal. We also have a team on Invent that you can join; there's not much activity in that particular space right now, so you can help change that. We have a BoF session at noon on Tuesday that you can go to if this sounds like an interesting topic. And we also have a sprint planned for some time next year, so keep your eyes and ears peeled for that.
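The release-notes idea above can be sketched in a few lines. The "FEATURE:"/"BUGFIX:" commit-message tags are an assumed convention for this sketch, not an existing KDE one, and real AppStream output would be XML rather than a dict.

```python
# Sketch: derive grouped release-note entries from tagged commit subjects,
# so notes don't have to be written by hand for every release.

def release_notes(commit_subjects: list[str]) -> dict[str, list[str]]:
    notes = {"Features": [], "Bugfixes": []}
    for subject in commit_subjects:
        if subject.startswith("FEATURE: "):
            notes["Features"].append(subject[len("FEATURE: "):])
        elif subject.startswith("BUGFIX: "):
            notes["Bugfixes"].append(subject[len("BUGFIX: "):])
    return notes  # untagged commits (refactors etc.) are simply omitted

log = ["FEATURE: Add dark mode", "Refactor internals", "BUGFIX: Fix crash on exit"]
print(release_notes(log))
```

The wiring-up work Nate mentions is turning output like this into AppStream `<release>` entries and publishing them where AppStream's remote release-notes fetching can find them.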
And with that, I would like to say thank you all for listening, and we can move on to the next one. Thank you so much, Nate. And we move on to accessibility with Carl. Tell us about accessibility, Carl. Hello everyone. I will talk about the second goal, about accessibility, or as we like to call it, "KDE For All," because we are an inclusive community. We want our software to work for everyone, even people with disabilities who need to use a screen reader or who can't use a mouse. There's this quote from Tim Berners-Lee, the inventor of the World Wide Web, who said that the power of the Web is in its universality, and that access by everyone regardless of disability is an essential aspect. I think that also applies to KDE. Why is this important? It's good for us to reach more people. We do that by porting our software to Windows or to macOS or to Android; we don't want to only target Linux users. But we should also not only target people who are experts with computers, or who can see the software. And it benefits everyone: even for ordinary users, being able to use the software with the keyboard is quite important, because they can be faster using the software. Another example is changing the fonts, increasing the font size: also useful for everyone, not only people with reduced vision. And accessibility includes the usability of the software in general. Another aspect is that accessibility is a requirement for public sector organizations, and if we want those organizations to use KDE software, we should try to ensure that KDE software follows the accessibility requirements. So, what did we do?
Since last year, we have been testing the software with a screen reader, or with a keyboard only, to see what can be used with the keyboard, or, just by listening to Orca, to see whether we can use the software at all. That allowed us to find a lot of cases where it was not that great. Often there were tabbing loops, where you press Tab and cycle between a few elements but don't go through the entire app. That's an issue, because then a screen reader user, or just a keyboard user, won't be able to navigate the entire application with Tab or keyboard navigation. And we started writing automated tests with Selenium, as was already said, which is a really cool framework for writing tests and also helps ensure that your software is accessible. I mean, it doesn't fully ensure it, you still need to test with a screen reader, but at least it helps a lot. We had multiple Season of KDE projects on that topic, for example on Tokodon, and Joseph did work on GCompris with Selenium. We also had some blind users coming to the KDE accessibility channel and testing the software for us, which was quite great, because when you can see the software, it's a bit harder to imagine what the difficulties are. But when someone comes and tells you, "that button here does this," and then, "oh, actually the Kirigami drawer handle does that," that's how you find the problems. After testing, we started improving the software. There were many patches for Plasma. A good example is Kleopatra: Ingo has a talk, I think later today in the other room, which hopefully will give you a bit of technical detail on how we made Kleopatra good for accessibility. We improved the KDE Frameworks, because if you ensure that the KDE Frameworks are accessible, it's easier to build applications that are also accessible, through the common
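The "tabbing loop" problem Carl describes can be checked mechanically: starting from the first widget and pressing Tab repeatedly should visit every focusable widget before returning to the start. A toy check, with invented widget names:

```python
# Toy focus-chain check: next_widget maps each widget to the widget
# that receives focus when Tab is pressed. The chain is healthy only
# if the cycle from `start` covers every focusable widget.

def tab_cycle_covers_all(next_widget: dict[str, str], start: str) -> bool:
    visited = {start}
    current = next_widget[start]
    while current != start:
        if current in visited:  # trapped in a smaller loop
            return False
        visited.add(current)
        current = next_widget[current]
    return visited == set(next_widget)

good = {"search": "list", "list": "ok", "ok": "search"}
bad = {"search": "list", "list": "search", "ok": "search"}  # "ok" unreachable
assert tab_cycle_covers_all(good, "search") is True
assert tab_cycle_covers_all(bad, "search") is False
```

This is the kind of property a Selenium test can assert automatically, instead of someone rediscovering the broken loop by hand with every release.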
components. We also made some improvements upstream in Qt: we sent multiple patches to Qt to improve accessibility, for example for buttons where the press action or the name was missing by default. These were easy to fix, but you find such things by testing the software, and there are a lot of areas where small improvements already make a lot of difference. We also sent a few patches to Orca itself, the screen reader for Linux, which is not a KDE project, so it's good to see cross-project collaboration and things like that. So, what did we learn? Voting on the goal doesn't make it magically happen: we still need more people working on accessibility. There's also not that much documentation on the Qt side about accessibility, and that's something we should probably try to improve upstream. There's the documentation for the QAccessible class, but there's not a lot of documentation on the best way to use it, or on how to make sure your application is accessible. There are a lot of docs and blog posts about web accessibility, but for Qt there's not that much. So, the future: we should continue working on accessibility. It would be great if more people would join. We want to organize a sprint next year together with the other goals. For me personally, what I want to do more of is blog posts and community outreach, to make more people in the community aware of how to improve accessibility, and better documentation and things like that. I didn't really have a lot of time; I have many projects aside from accessibility in KDE, so it would be great if more people would join and help with that. So, how can you help? Test your applications: install Orca and try to use your application with it, and see how well it works. Working on documentation is important, to
document the best practices for KDE software in general. If someone could help me with blog posts and things like that, to document the current process for the community, that would be really helpful. And then there's the tooling: I think Volker worked a few years ago on a GammaRay accessibility inspector. It would be great to revive that effort, to be able to inspect the accessibility state in GammaRay alongside the visual state and everything else. If you want to join, there's the KDE Accessibility channel on Matrix, and we have a BoF on the first day at noon. And that's it. And that brings us to the third and final goal, sustainability, with Joseph. Let's see the presentation. There you go. Apologies for breaking the aesthetics; I tried to convert the template to LaTeX, but it wasn't successful. I'm actually not Cornelius Schumacher; he's the champion of this goal, and I'm here pretending to be him, so the usual disclaimer applies: any errors are my own, not Cornelius's. I'm going to present about the sustainable software goal, which is part of the larger KDE Eco initiative that started a couple of years ago. If you would like, the slides are available to download at the sustainable software goal repository on Invent; I'll come back to this at the end, or you have time now to scan it. I'm going to go over just a couple of the things that have happened since the goal was adopted and voted for in October last year. One of them is the publication of the KDE Eco handbook. This was the culmination of the work done in the Blauer Engel For FOSS project, which ended in March, but it also coincides with the sustainable software goal, and a lot of the topics addressed there are directly relevant to the goal. The handbook, if you haven't seen it yet, is currently broken up into three parts. The first part is meant for a general audience: why is this important, how does software
influence resource and energy consumption. The second part is about the Blue Angel eco-certification criteria and how they align with free and open source software. And the third part is how to fulfill the criteria, with detailed instructions for measuring software following the guidelines in the Blue Angel criteria, and then fulfilling the other criteria in it. That's the first iteration; we want to continue expanding it. As many of you surely know, Okular was eco-certified last year with the Blue Angel. Currently it's the only software that's certified as resource- and energy-efficient. That eco-certification has opened up many new channels for KDE to present the work that we are doing. One of them, for example: in December, the OpenUK awards at the House of Lords in London. This was an event organized by OpenUK, an advocacy group for open tech. The host of the event was Francis Maude, a member of the House of Lords and the minister who, a decade ago, created the gov.uk website, which provides information about open data, open formats, and policies regarding that. A co-host was KDE's own Jonathan Riddell, and he was there to also present what KDE is doing in the KDE Eco project. So yeah, really big channels have opened up to us in this regard. Another one was Cornelius, who presented at a Green Party event on green digitization ("not overheating by design"). This event featured many prominent speakers, including Germany's vice chancellor Robert Habeck, and Cornelius participated as an expert, given the Blue Angel certification of Okular, in a right-to-repair workshop organized at this event. The blog post about this was very popular: it was featured on Hacker News, and there was a huge spike in views of the KDE Eco project website afterwards, so it garnered a lot of attention. Another project, or rather several projects, not just from KDE Eco but from Season of KDE in general, were reported on in heise.de, a big publication for tech news in Germany. It featured all of the Season of KDE projects, but the eco ones were particularly prominent. This year we had three projects working on sustainability issues. One was a tool called KdeEcoTest, designed by Emmanuel Charruau (I'm not sure I'm pronouncing that correctly). This tool is designed to make usage-scenario scripting easy and robust: the existing tools out there have various issues, and this one tries to keep the process simple, like some of those tools, but more robust, because it's not based on pixel locations for the emulation but rather on command-line triggering of actions when you're trying to emulate user behavior. The tool was quite limited, and it has now been expanded with many more features by Mohamed Ibrahim. We had another project looking at the documentation for Okular, trying to extend it to Kate in particular; that was from Rudraksh Karpe. And then, as has been mentioned by both of the other goals, the Selenium testing. Just a small correction: I wasn't actually working on the GCompris testing; it was Nitin who did excellent work using Selenium to emulate user behavior for GCompris. And as Emmanuel wrote in a blog post, this is a project that actually hits all three goals. Nate has already talked about it, it helps accessibility, and it's also used for usage-scenario scripting to emulate user behavior, so that we can have reproducible energy-consumption
results for software. Right now there's a project going on in Google Summer of Code which is trying to make the lab that we set up last year at KDAB Berlin, for measuring the energy consumption of software, accessible remotely. All this outreach is great, but the actual reason we're doing this is that we want to measure the energy consumption of software and drive it down where we can, and this remotely accessible lab will make that much easier to do. The idea is that the lab is set up with dedicated hardware for measuring the energy consumption of software: an external power meter measures all of the energy draw when using the computer, and the data is then aggregated onto another computer that collects the results so you can analyze them. The plan is to set up an interface through the GitLab CI, so that you can upload your code and tell it which test you want to run; it will then send the commands to the lab at KDAB Berlin, produce the results in a usable format, and give them back to you, so that you can see the energy consumption of your software. And if you're interested in eco-certifying it, this would also cover one of the criteria for eco-certification. So we're trying to make this automated, easy, and accessible to everyone. Just a bit more detail about it: the software is installed as a Flatpak bundle, which is then run on the hardware at the lab, and the results are then analyzed. Another thing we're working on is an "awesome list" for sustainable software. This is, again, at the sustainable software repository. It's a great list of resources related to general green-coding best practices, as well as how to measure sustainable software, what tools exist, etc. So check that out, and if you have resources that you want to contribute to it, please do so. There are several talks taking place today, tomorrow, and next week related to the sustainable software goal. One is
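What the lab ultimately computes from the power meter's samples is an integral: energy is power summed over time. A minimal sketch of that aggregation step, with made-up numbers (the real pipeline's data format is not shown in the talk):

```python
# Sketch of the measurement lab's core calculation: integrate sampled
# instantaneous power (watts) over time (seconds) to get energy (joules),
# using the trapezoidal rule between consecutive samples.

def energy_joules(samples: list[tuple[float, float]]) -> float:
    """samples: (timestamp_s, power_w) pairs, sorted by timestamp."""
    total = 0.0
    for (t0, p0), (t1, p1) in zip(samples, samples[1:]):
        total += (p0 + p1) / 2.0 * (t1 - t0)
    return total

idle = [(0.0, 5.0), (10.0, 5.0)]  # a steady 5 W for 10 s
assert energy_joules(idle) == 50.0  # 50 joules
ramp = [(0.0, 4.0), (10.0, 6.0)]  # linear ramp averaging 5 W
assert energy_joules(ramp) == 50.0
```

Comparing this number for the same usage scenario across software versions is what makes "drive energy consumption down" measurable rather than guesswork.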
right after this: Volker is going to present about measuring the energy consumption of software. Tomorrow, Harald is going to present Selenium GUI testing, which is relevant for all of the goals. And on Monday we're going to have a BoF on measuring software, so come by if you are interested in the process of how to measure your software. There are many other related talks; these are just a couple that stood out for me. On Saturday there's the Flatpak and KDE talk from Albert; as you saw, Flatpak is part of the remote eco-lab process. Documentation has also come up several times, and it's maybe not obvious how it relates to sustainability, or maybe it is, I'm not sure. If we want to achieve a sustainable circular economy for software, and also hardware, we need documentation for the repair and use of software. That's the sustainability angle on it: we can't use software long-term, and keep hardware in use long-term, if we don't have the documentation for how to use it, repair it, etc. Another topic I thought might be interesting to think about in terms of sustainability: embedded systems. These are not the systems that were targeted by the Blue Angel certification. The Blue Angel tries to address the issue where hardware keeps getting more powerful and software becomes less and less efficient because of it; it lets us get lazy. Embedded systems are the exact opposite: you have a limited amount of computing power and you have to optimize to fit it, and I'm sure that has many overlapping topics with the sustainability and efficiency side in terms of optimization, so I just thought I'd point that out. And then there's another talk looking at incorporating green-energy information from solar panels directly into KDE Plasma. That's all happening in the next two days. One of the things that we have on our to-do list: as part of the Selenium Season of KDE project, we wrote a guide which we want to add to the KDE Eco handbook as the next chapter. We
need people to test the guide to see what needs to be included or removed, and what's accurate or inaccurate. So if you have a chance and you're interested in checking out Selenium GUI testing, maybe try the guide, see how it works for you, and give us some feedback on it.

Another idea we've had for a while now is the KDE Eco badge. Eco-certification is nice as something third-party, independent of KDE, but we can also do something internal: define certain criteria that we say are important for KDE software, and if you fulfill those criteria you get a little badge saying you're fulfilling the sustainable software goals of KDE. If you're interested in working on this, please be in touch.

We've also started taking some steps towards an eco tab that would be included in KDE software: you have the About and Contributors tabs, and we'd add an eco tab which would highlight the aspects of that software which are sustainable, from things like eco-certification, but also links to documentation and source code. If there are measurement data that aren't part of a certification process, you can still link to them if they're relevant for sustainability issues. If you're interested in working on that, please be in touch.

If you have other ideas, you're more than welcome to join: we have monthly meetups every second Wednesday, and there is a Matrix room and several other things which I forgot to put in the slides; you can find that information at eco.kde.org, it's quite easy to find.

Then, regarding the general unifying aspect: one of the ideas that came up when discussing the presentations here was having crossover presentations between the different groups. We have this monthly meetup, and we'd like to invite people who are working on accessibility or on automation to come to our meetup and discuss ways their work overlaps with what we're doing in sustainability. So if you're interested in that, or if the champions are interested in that, maybe that's
something we can talk about. Other ideas that have already been mentioned we can discuss in the panel, and I believe that's it, so thank you.

Thank you very much. Now we have quite some time for questions, either about any of the specific goals or about the goal process as a whole, and I believe Ade will be the mic runner. I will run around with questions.

I don't know if this is too detailed, or whether it can be answered, but how hard was it to get the certification? Did something need to be changed? Many changes? How was that experience, getting to that level?

So the most labor-intensive aspect of it is the usage-scenario scripting for the measurement process, and if you're going to start adopting Selenium you're already taking care of a big chunk of that work, as well as accessibility and automation, so we can maybe incorporate some of that work into the general development process. The measurement itself is then just a matter of having access to a suitable lab. For fulfillment, everything else is just documentation. Green open source software is recognized as being a more sustainable approach to digitization, given the various ways it allows users more autonomy over how the software is used, which can influence energy consumption, and the removal of vendor dependencies, so that you can continue to support hardware over a longer time, and so on. These are all things I think we take for granted as obvious, but this puts them in terms of a sustainability angle, and that part of the criteria is actually just documentation: showing how you fulfill them.

Thank you. This is a hybrid conference; we have questions from online, and Neil Fitos is going to be the voice of online. Here you go. How do we make sure we can compare the power consumption of our software over time if the hardware used for measuring changes? I see Cornelius stepped up and is answering it, so I don't know if you, Joseph, want to add anything to that. So the question was how do we make the results
usable over time, so that we can compare as the hardware changes: what do you do to keep the measurements up to date? Right now in the lab we have a few different computers, one of which is the recommended hardware for the Blue Angel certification. One of the goals, the moonshot goals, is to have several options of hardware, so you can say "I want to test on this hardware, which might be a little bit older" or "I want to test on more recent hardware". But the answer is documentation of which hardware was used, so that you have a maximally similar environment if you were to retest and want to compare the results directly with the tests you did previously.

Great thinking, Joseph. Then there's another one, for Nate. It says: wasn't there a prior effort at an icon design pipeline called Ikona? What happened to that?

That's a good question. Ikona was a standalone app that would have definitely helped with the icon design pipeline proposal: it was an app used to preview an icon that had already been made in various environments and against various backgrounds. The idea I was bringing up on that slide was more about aiding the process of creating the icons in the first place, which right now is quite error-prone and requires understanding the details of how the insides of SVG files work. So I think there's definitely some work that can be done at the end to verify the end result, but I also think it's important that we make the process of getting there, so that there is an end result, a little bit easier.

I have a question about the KDE Eco measurement process for applications. I can imagine that you somehow need to make sure that no other software installed on the test device interferes with the actual measurement of the application you're testing. How do you approach this?

Right now it's just, very simply, turning off anything that can interfere while you're measuring. We have discussed ways to record the state that the
software was in before the first measurement and then putting it back into that state for each measurement, so that you have a maximally similar environment, because with each measurement you're going to be changing the system a little bit. Right now the way we're dealing with that is just removing the configuration files that might have been changed, and any files that may have been produced during the measurement, so that the system gets back to a maximally similar state. That's certainly something we could look at later: once we get everything set up and the first step is achieved, we can start thinking about how to improve that process.

I have one more question regarding the measurement lab: do you have a plan to support mobile devices? Plasma Mobile is one of the things we support, and that's where energy efficiency is actually most useful, because you don't want a mobile device's battery to drain completely. So, do you have a plan to support mobile devices?

Short answer: no. It's not what we're working on right now, but the Blue Angel certification is extending its criteria to include mobile apps and client-server systems, and they might have some tooling there that could be useful for measuring mobile apps. At the moment I don't know how we would do that in our lab with the setup that we have. Okay, thank you.

I have a question for Carl. You mentioned that accessibility is also very important if you want public institutions to select your software. Do they have requirements, like you have to have a certificate? Or do they do their own testing? Or is it just that we tell them we made sure our software is accessible?

I think it depends. For the web there is the WCAG standard, which has multiple levels, and usually institutions want at least level AA, and then there are companies who do certification for that. It worked that way in the past with Nextcloud. It's usually a real certification.
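Coming back to the measurement-reset approach described a moment ago (snapshot the configuration before the first run, then remove changed config files and produced files after each run to get back to a maximally similar state), a minimal sketch of that idea could look like the following. The paths and file names are invented for illustration; this is not the lab's actual tooling.

```python
"""Sketch of 'reset to a maximally similar state' between measurement
runs: snapshot config before measuring, restore it after each run.
All paths here are invented stand-ins, not real lab tooling."""
import shutil
import tempfile
from pathlib import Path

def snapshot(config_dir: Path, backup_dir: Path) -> None:
    """Record the state the software was in before the first measurement."""
    shutil.copytree(config_dir, backup_dir, dirs_exist_ok=True)

def restore(config_dir: Path, backup_dir: Path) -> None:
    """Throw away anything the measured run changed or produced."""
    shutil.rmtree(config_dir)
    shutil.copytree(backup_dir, config_dir)

# Demo with a throwaway directory standing in for an app's config dir
work = Path(tempfile.mkdtemp())
config, backup = work / "config", work / "backup"
config.mkdir()
(config / "apprc").write_text("Theme=default\n")

snapshot(config, backup)
(config / "apprc").write_text("Theme=dark\n")      # the run mutated config
(config / "produced.tmp").write_text("leftover")   # the run produced a file
restore(config, backup)

print(sorted(p.name for p in config.iterdir()))    # ['apprc']
print((config / "apprc").read_text())              # Theme=default
```

A full-copy restore like this is the simplest version of what was described; recording and diffing the state, as mentioned in the answer, would be the more refined follow-up.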
Hi, I have a question for Carl: how's the status of accessibility in the Wayland environment? Can you repeat? How's the status of accessibility in the Wayland environment? Orca works on Wayland; we can use Orca full time and it's working, but I see a few things that would be better to improve; I'm not that familiar with that area, though.

We've sort of got an answer here: there are definitely some things that don't quite work properly, accessibility-wise, in the Wayland session. A lot of the keyboard-related accessibility features, like sticky keys, slow keys, that kind of stuff, used to be implemented by the X server, but on Wayland that's not a thing anymore, so we get to implement it ourselves in KWin, and a lot of that is missing. I've recently worked on some of it, but it will take a few more patches to get it to parity with X.

I like this setup where we get answers from the audience, so maybe the panel has a question: do any of you have ideas how to measure mobile systems in a lab? We got an answer that works: yeah, we built a custom setup. We hacked an Android device so that we removed the battery and basically built a fake battery that's connected to the power supply, and that's how we're actually measuring the power consumption as if the device is on battery, because of course most Android devices cheat when they're on battery: they disable some power features, or add them. We had the issue, for example, with VLC, where we were given higher priority when we were plugged into actual power. So that's how we did it; and if you have a device, we have an RS232 connection to just read the timings. It's a mess; let's talk at some point anyway.

To add to that answer: ARM Incorporated has these giant labs of hooked-up devices, with relays to reset them; there are hardware setups for that. Other questions or answers? I'm not directly involved, but Collabora runs a LAVA lab to run a bunch of the tests for Chromebooks and Android, so they may have something; there are
definitely some ways to use LAVA to do that kind of thing. Let's talk.

Next up, again from the internet, regarding eco-certification: is there an ongoing measurement taken each time the software is released, to ensure there are no regressions, or minimal regressions, in power usage over time? Let me just make sure I understood: is there an ongoing measurement process so that we can see if it's improving or regressing? The way I understood it: after you get the certificate, is there something ongoing?

For the certification you need to measure it regularly. The exact details of that are a bit of a gray area, but yes, you're supposed to measure regularly, and right now our approach is that we want to measure major releases. First, though, we want to get the lab set up for easy accessibility; once we have that, the idea is to have regular measurements for major releases, and that's required for the eco-certification as well. Thanks. And sorry, just one more thing to add: the eco-certification requires that consumption doesn't increase by more than 10% during the time of certification, from when it was certified, so there are requirements on staying within a limit. That means we could lose the certification at some point, so we need to be careful: we can't get more than 10% worse. So we've got 10% of worseness to play with.

Make the world a better place with your question. Just a small follow-up: do we need to redo the entire certification process in case we lose it, just to get back under the 10% threshold and be certified again? Good question; I don't know. Hopefully we don't find out, but you're motivating me to look into it, so I'll see if I can find that information.

My question would be: do you do tests showing that, on a certain device, using Plasma extends the life cycle, for example in an organization? Are there tests to show how much e-waste you avoid if you use Plasma rather than Apple
devices or Windows devices? No. The requirements of the certification are to state the minimum system requirements and demonstrate that the software can run on hardware that's at least 5 years old, which I think is way too low a bar. One of the things I have in mind, and would like to work on at some point, is a campaign on e-waste reduction with free software. I think gathering data about hardware that is no longer supported by the two major vendors but can still run free software, and documenting that, would give us at least some idea of which devices would otherwise end up in the landfill but can remain in use because of free software, even if not the numbers of how many.

If I could add something to that answer, something a little more on the concrete side: in the town where my parents live, my mother is involved in a community organization that is doing exactly that right now. There are 80 desktop computers, over 15 years old, that would have become e-waste recently; instead they are being loaded with Kubuntu, used to teach computer classes, and given away to low-income students. So there's a specific example of how using eco-friendly software can directly prevent e-waste.

Mine was instead to jump on the accessibility topic again; I just remembered another partial answer to "how is the accessibility status on Wayland". Specifically about Orca: the actual screen reader part just works, for the most part. One thing in Orca that doesn't work at the moment (unless it was fixed recently, but I don't think it has been) is the feature that puts a rectangle over the element it is currently reading. That cannot work on Wayland right now because of the absolute positioning stuff, and there isn't a clear solution yet. I guess the final solution will involve, as usual, a new protocol, as with every Wayland problem, but that will need thinking about. There's the same issue with Accerciser, which is the software where you can analyze the state of the
accessibility of a program and actually show the rectangle; that doesn't work on Wayland either. Thank you.

We've got time for one more question. You've already asked one; you're new. I think we can even do two; we can do two.

This one's kind of for all of you, although it sounds like the eco folks are already doing it: as an app developer, are there plans to create checklists of stuff that I can be doing to help meet the accessibility goal, or the automation goal? It sounds like the eco folks are doing it with the handbook already.

That's also the idea of the badge, which is something we really should consider doing: if you fulfill these things, you're recognized within the community as reaching a certain sustainability goal. But not yet; it's definitely something we should do. I think it's a good idea too; all of the goals can probably benefit from this, and there's a lot of overlap: if you do the Selenium GUI testing, you're basically doing all three at once. From the automation perspective there's the obvious stuff, like having a continuous integration system do testing so you know your tests are passing, things like that. For accessibility it's the same; it's a good idea to have a checklist.

And the very last question, thank you. I heard you mention before that you have an energy budget of 10% and could lose the certification; you weren't sure about the details. Is it possible for this measurement and certification process to be scaled up and automated, so that when you merge, or when you do a new release, it's part of GitLab? How scalable is it, for it to be part of the release process, both for Okular and for other Plasma applications?

So, how can we integrate it into the development process? That's the idea of the CI runner: if we want to measure, you can simply make your merge request and then test in the lab. That's sort of the goal. So do you think it's something where I'm going to merge something, and is
it going to mean I wait five hours to get the answer? How does the process work?

We can probably talk about that; this is something where maybe other people who are involved have ideas, but we can certainly design it to fit our needs. If you want to do eco-certification, you need a certain number of measurement runs to have a statistically relevant measurement, but if it's just for a quick thumbs-up, thumbs-down kind of measurement, I'm sure there are ways to shorten the process.

Thank you very much. If you have more questions during lunch, these guys will be available to you. Thank you so much.
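One concrete detail from the Q&A worth pinning down is the 10% budget: certified software may not consume more than 10% above its baseline measurement during the time of certification. The check itself is trivial; here is a minimal sketch, with made-up numbers (the real criterion comes from the Blue Angel measurement procedure, not from this code).

```python
"""Sketch of the Blue Angel-style regression budget mentioned in the
panel: consumption may not exceed the certified baseline by more than
10%. Baseline and current values below are purely illustrative."""

def within_budget(baseline_wh: float, current_wh: float,
                  allowed_increase: float = 0.10) -> bool:
    """True if consumption stayed within the allowed increase."""
    return current_wh <= baseline_wh * (1 + allowed_increase)

baseline = 20.0  # Wh measured for the certified release (made up)
print(within_budget(baseline, 21.9))  # 9.5% worse: still within budget
print(within_budget(baseline, 22.1))  # 10.5% worse: budget exceeded
```

Run per major release in CI, a check like this would turn "we need to be careful" into an automatic thumbs-up or thumbs-down before the certification itself is at risk.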