and I think, for our part, you're making, or you're helping make, it happen that way. Both Goodfing and Qt will have booths because they are top-tier sponsors. Well, reach out and ask what they do; I'm sure they will appreciate it. Also, thank you very much to the University of Macedonia for hosting us here. This also makes it happen in a very logistical and practical way. Thank you very much, University. On a more pragmatic note, we're going to have coffee breaks. They're free, and they're between 10:50 and 11:15. It's on your badge, you will see. Well, join in, grab a coffee, and talk to the people, like I was mentioning before. There's going to be lunch. You had to register for it, though. If you forgot to register for some reason, well, go find the people at the registration booth and tell them, or you can find lunch anywhere in the city, because there's plenty of food here and it's all great, so that's also fine. There are places where you can get water. I was asked to say: don't drink from the toilets or the bathrooms. Yeah. Sure. Well, the registration and Akademy teams will tell us all about the water. We have a great code of conduct that you must abide by. You have the contacts on your badge in case you have a problem, which you shouldn't have, but, well, it's there for you in case you might. Remember to wear your badges, unlike what I'm doing myself right now. They're also going to be needed for the social events. If you don't want pictures taken of you, use the red lanyard. Note that there are colors; also respect other people's lanyards. If somebody doesn't want pictures taken of them, well, don't take pictures of them, right? It's not that hard. We have a hashtag, #Akademy2023. If you do things on social media, use it, and that will help; well, the world will notice what we're doing. So that's kind of what I wanted to say to get started. I'll leave you with the keynote very soon, if that's the time.
They'll tell us all about space and Libre Space. Thank you, and thanks for joining. So maybe we can do an Akademy t-shirt check with you. I'm wearing my last year's Akademy t-shirt, that's 2022, and I can see one there. Albert, good to see you here, Marco. We have 2021 Akademy t-shirts in the room. So we've got one, two, three, all right, 2021 is present. 2020 had that amazing color scheme, yes. Excellent. Do we have the 2019s in the room? 2019 was Milan, wasn't it? That was great. Before that was Almería with the orange; that was an amazing t-shirt. Any pink shirts? Not bringing your old Akademy t-shirts, what's going on here? Okay, that was 2012, oh, there's one, yeah, Berlin. Okay, we've got a ton of those. And everyone else is wearing their shirts at home; we've got it there. Yeah. And we've got one old guy up here. And now: space. Is it easy? It's not. So, hi, I'm Eleftherios Kosmas. You can call me elkos; it's usually hard to pronounce my name. I'm representing Libre Space Foundation; I'm its vice chairman. And I'm going to talk about how open source actually empowers getting to space, the Libre way. Before starting, I just have to say that I'm really, really happy to be here. I've been a KDE user for the last 23 years or so, since KDE 2 came out. So I'm really happy to be with you; I never actually expected to be here with you. So, how did all this start? The thing is that, like most interesting stuff, it started with a small group of people, friends if you like, who met at the local hackerspace in Athens, Greece, which is focused on open source technologies. That started around 2011. In 2014, during the NASA Space Apps Challenge hackathon, we started SatNOGS, the satellite ground station network. The same year, we won the Hackaday Prize: approximately 200 thousand dollars. So we had to decide what to do with all that money. It was a lot, during the worst economic crisis in our country. So we said, hey, let's make a nonprofit foundation.
So, let's use all those resources on something we love and share back with the community. In 2017, we actually had in orbit our first satellite, UPSat, which we did for the University of Patras. And in 2022, our first open source satellite deployer; that's the second one. All this work is powered by vision, by open source and openness in general: open governance, open hardware, open data. And what we seek to do is make space accessible to all. To do so, we develop stuff from ground stations to satellite missions, and we adhere to a set of values. We have our own manifesto, if you like, mostly inspired by the Mozilla manifesto and the Outer Space Treaty. The whole idea is that we use the GPL version 3, the LGPL, the CERN licenses, open data, and we try to document everything: all our processes, all our governance, the way we do stuff, in our documentation efforts and our repositories. For an organization that works in space, that's a little bit weird. Most such organizations, especially the ones not funded by governments, don't do so, because most of them are for-profit companies, or they are just commonly not very open. You see, being an open source organization actually allowed us to create some interesting projects like SatNOGS, like UPSat, which you can actually see in orbit; it was deployed from the International Space Station, so you get a nice picture from an astronaut. And we do a lot of hardware development, mostly, nowadays, and a lot of software development that will create more open source data for space. Why do we do so, and why do we believe that's the way forward? Well, first and foremost, open source allows for rapid prototyping and rapid iterations. For example, the SatNOGS communication system, the little module we created, it's over there, which is designed for signal analysis and communications.
It was actually also forked by the good people from the Aristotle University of Thessaloniki, next door from here. They forked it and managed to use our design on their own future CubeSat, and that's really interesting, because that's not the norm in space. We've also seen that this design can be used for ionospheric disturbance analysis, which for us is interesting because we hadn't thought about it. Some other people thought about it, and because it's open source, they can actually see how it works and say, you know something, you can also do that with that hardware. We never thought about that. It also allows for global collaboration, meaning that the SatNOGS network now has more than 400 ground stations in 50 countries all over the world. You can actually get data from Taiwan, from Russia, from the United States of America, from the Azores in the middle of the freaking ocean, which is really interesting, in my humble opinion, because that's something usually only large superpowers could do a few years ago. Nowadays, with open source and a huge community, you can actually do stuff and lower the barriers to entry, because, for example, for a team or a person to participate in SatNOGS, you only need a Raspberry Pi, a few antennas, a simple RTL-SDR, and you're good to go. I mean, I have one of these things on my rooftop, and it's simple, and it's a way to give back to the community, a way to participate in a space project, and also a way to participate in projects that you find really interesting. So what we must understand is that using open source in space is not just developing open source. We prefer to dogfood open source, if you like, meaning that you have to use open source solutions all over your stack. You have to use open source solutions for daily operations, for community operations, for development needs. It can't be, I'm going to create an open source community and use everything closed.
That doesn't go a long way. Sometimes you have to actually assist the development and actually lead the way to get the features you need, and have daily interaction with the open source projects you use in order to get some of their good practices and good ideas. And I don't mean only code, I don't mean only, ah, yes, they did a really nice thing in that community, so I'm going to take that. There are good practices, good community management ideas, and good implementations of things among several projects, and by having interactions with other projects you can actually see what other people are doing and try to copy the right stuff, or the stuff that suits you. And let's be honest, using open source solutions allows for interoperability within the organization, especially an organization that's mostly remote and distributed around the world, and with external entities. For example, we are working with the European Space Agency, we are working with Harvard, we are working with several hackerspaces across the world. Having open source tools allows for better communication with our partners. And these are some of the open source projects we use; of course we use Linux, apparently, but the thing is that we try to use as much as possible across the board. But there are challenges, and being an open source organization in space creates some difficulties, because let's face it, space is hard, it can be really hard. Orbital hardware has few ways to be fixed. Once it's up, you can't do anything about it. You can't go and say, oh, you know something, let me go up there and fix that screw. But open source can provide a solution for more robust systems. There are also several regulations and several standards you have to follow.
But there are ways: you can actually build a framework on them, you can work with the standard and find ways to implement it, as long as it is not locked in a proprietary way. We've seen this time and time again. There are regulatory issues, and what we believe is that as long as you try to be open and implement stuff openly, you will provide solutions for all, and that's key. To be honest, space used to be the domain of superpowers, of large corporations, of people with resources. Miniaturization allowed lowering the barriers to entry, and open source in your hands can be a catalyst in lowering them further. Now you can build your own CubeSat in your local hackerspace; you can actually go to Athens and see people building satellites in the local hackerspace, which is a little bit crazy, because you can see people working on space-rated hardware while other people are hacking something really silly, which is really nice in my opinion. This industry, and it's not the only one, is full of proprietary solutions and secrecy. Secrecy has been a staple of space for many, many years now. And nowadays we see that the impact on innovation is so huge that traditional entities, private and public, are keen on checking out open source. They've not yet gone full open source, not all of them at least, but you can see that the European Space Agency starts to promote open source, NASA has open source repositories, there are processes, there's work to be done, and we're trying to push things forward in the open source way. But let's face it, space is also a very capital-intensive industry; it takes a huge amount of money to put hardware in orbit, a ridiculous amount sometimes.
So that impacts innovation, and it impacts open source too, because we open source creators are used to creating something and having it in our hands, able to use it without any intermediaries, without having to invest huge amounts of money to have it even work on its own. This is not the case in space. But the case is that, because it's so capital-intensive, there are opportunities to actually fund open source development. We are actually an organization that's sustainable; it has around 15 to 20 employees, most of them engineers. I think I'm the least engineering one. And the thing is that we have been able to sustain our organization for the last eight years. We believe the same is true of KDE, and that's indicative of the common ground between open source organizations like KDE and Libre Space Foundation: both foster communities, both try to democratize, if you like, access to technology, and both are thriving through community. We couldn't build stuff without our community. We couldn't build stuff without open source tools, like GitLab, like BigBlueButton. We couldn't be the way we are without being collaborative and transparent in what we do. What you're trying to achieve, what you're trying to build, is similar, to an extent, to what any open source organization does: being open, promoting open source culture and openness in general, and creating a set of open tools for people to use. Of course there are differences, because let's face it, as I said before, and I'll say it again, and I'll reiterate it continuously: space is hard. There are domain-specific challenges. There are tests you have to do, there are regulations you have to comply with. And while working in that kind of environment, you may feel a little bit constrained in how things work. You may think, I can't do the things I would like to do, because I want to break some stuff.
You can't; space won't let you break stuff in orbit, and in general they don't let you break stuff around space agencies, I don't know why. Most of our work is purpose-driven. That's not really similar to what KDE does, because KDE, actually, in my humble opinion, please correct me if I'm wrong, provides a generic computing experience. You can use KDE for whatever you'd like. That's a strong selling point of KDE. I'm a KDE user, but my use cases were different across the many years I've used it. I used to use KDE to just browse. I used KDE to develop stuff, and it wasn't nice for anyone involved. And now I use KDE for developing open source hardware for space. The thing is that what KDE and projects like KDE do is provide a very important common ground for us, a very important infrastructure to work with. We can actually use stuff around it. And especially, the way we see it, the way I'm seeing it, at least, is that KDE is flexible enough to adjust to my special user needs as they change. This is really important in the way I see it. As I was saying, though, we need to engage the community in different ways than KDE does, meaning that as we work with a very specialized bunch of people, scientists, space engineers, space agencies, they need that engagement in a different way. You need more formal engagement sometimes; you need to be a little bit more inside the box, if you like. But that thing is also changing in the space industry, because the space industry has learned that you can't always innovate inside the box. The other thing with Libre Space is that our deadlines are very hard, because honestly, you can't say to the launch provider, you know something, guys? Can we postpone launching that big rocket of yours because my hardware is not ready yet? Can we? No, I can't say that. I can't say I haven't passed the QA measures they need for their hardware and their software, because this has to be done.
I can't say I didn't do environmental testing, because I have to do it. I have to vibrate the thing before it goes on a rocket. And the thing is that, in contrast with KDE, we do have a hardware community. We do develop hardware. Developing hardware can be, well, funky hard, but it also creates a focus on a singular platform. This isn't the case for KDE, because KDE has to be usable on any of these things, or hopefully in the future, on our phones, or on any device we use. This is different, but also really interesting in the way such a community must interact. KDE has to work with a million devices and a million systems doing different user scenarios, which can be really complicated or really extreme. Yes, there are ten users out there that use this weird driver, and you have to figure out a way to use that on your system and create a UI around it and have an experience similar to other KDE users' experience. For me, it's crazy. I don't know how you people do it. But also, it's something that's out of our world; we can't believe that it's done, but it's done. We've seen it on our own machines, to be honest. And the other difference is that when you people work on and develop KDE, the feedback is more or less instant. You do stuff, you create your binary, and you test it. I do stuff, I ship it away, and I beg the gods or whatever not to slip the launch day because, you know, the weather is bad. Oh, poor you. Next month. And the other thing is that you stay in the unknown. You don't know how stuff will happen. Sometimes, without your own input, without you messing around with stuff, you get that. That wasn't pretty. I mean, that was a community, a bunch of people, some people here, I can see, that have seen and worked on the code of this thing.
There used to be PicoBus, a PocketQube launcher, and two really small satellites of Libre Space Foundation, and four Spanish satellites on the PicoBus, because we decided to assist our friends in Spain. So we got the thing in the launch pod. It started flying, and kaboom. After one minute of the flight, the rocket started wobbling, and they pressed the button. People worked on this, worked hard, and this thing worked, and then it didn't. And that's a challenge faced by organizations that are focused on space: you might work a lot, and things will fail. And failure is an option. Well, if they hadn't pressed the button, it would have been a worse failure, so I understand that. So, what have we learned so far on our journey into space, in the last eight years, no, longer than that? First and foremost, we learned that the sky is not the limit for open source. Open source can be everywhere. It can be in orbit. It can be on Mars, as JPL showed with their little helicopter. Challenges can foster innovation. The challenges and the regulations and all that regulatory framework we have to work within might force you to change your mind, to be more flexible than others, and to try to implement stuff in a better way. And I think one of the most important things we have taken into account is community participation. We couldn't be here, we couldn't build SatNOGS, which is the biggest satellite ground station network globally, without a community, without people building their own ground stations, without people suggesting, ah, you know something, guys, you have to do that, because that would be more efficient for that driver over there. I mean, come on, I couldn't have thought of that. And it also allows for input in domains that you are not used to working with. I am not a space engineer. I'm trained in healthcare. I'm a nurse, actually. So, I used to be a nurse. And having people with domain-specific knowledge helps, or even more:
no domain-specific knowledge at all, but knowledge of certain areas of physics or things like that, can allow for innovative solutions and can allow a community to go further. In my humble opinion, collaboration can have a geometric impact, meaning that it's not linear. We've seen that when there's a certain amount of people in a project, when it starts getting a little bit more crowded and has more commits from people outside the regulars, things change and go faster and faster. Sometimes it's difficult to keep up, but it gets faster and faster, and you see that a project that started going slow, mostly working, goes crazy fast, because the community is pushing the project forward without you even expecting it to go as fast as it goes. And we've actually learned that we have to learn from our peers: learn from other open source organizations, learn from other space organizations, get the knowledge, the experience, and the wisdom, if you like, of the open source community. We've seen it with other organizations too, and I think we've seen it with KDE. You people have experienced stuff we haven't. You've seen how difficult it is to migrate to a new environment; we lived that with KDE 3 to 4, and we survived it. The idea is that both the right things your peers do and the wrong things your peers do are a way to learn stuff. And we believe that an open source organization like ours, or like yours, has to learn from the mistakes and the right choices of its peers, because we have a great thing going, meaning that what you do, and what Mozilla does, and what Debian does, is actually a learning process for all of us. You can't always implement the same things, because we are not the same organizations, certainly, but you can actually learn stuff from each other. And you can also collaborate, either directly or through umbrella organizations, going forward.
In my humble opinion, and I think this is shared by many people in the KDE community and other open source communities, we have to collaborate on common challenges. We have to collaborate on challenges such as policy coming in our countries and in the United States of America, and we have to figure out how to protect our users and protect open source. I'm open for your questions. I would really like to open a dialogue and start chatting, because, okay, doing a presentation is cool, but chatting is better. Of course, you can always send me an email at elkos at libre.space to just say hello, or chat and discuss whatever issues you may have. But I'm open for your questions, if you like. Certainly. Let me see. Yes, that one. So, it has the KiCad logo on it, and the question is: how much of that is going on? How much hardware do you have to design and make yourself, or can most of the hardware that goes into the satellite come off the shelf? Well, here's the thing. We don't actually design ICs, yet, I would say. There's always interest, and if my engineers had their way, I'm pretty sure they would design their own ICs instead of off-the-shelf ones. But other than that, all the design is custom. Excluding components like that SMA connector or that screw over there, or, you know, resistors and stuff like that, the design, the PCBs, the mechanical designs, everything is custom, and it's open source, designed with KiCad and FreeCAD, in an open source repository you can actually download, so you can build it yourself, put it on your own rocket, and call it a day. It's a really interesting project. Yeah, that's a really interesting one, actually. That's SIDLOC. SIDLOC is a satellite identification and location protocol. We designed it for the European Space Agency, and the thing is that it actually emits a beacon that allows for, it's not just satellites, spacecraft identification and location.
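As a toy illustration of the beacon idea, an identification beacon can pack a spacecraft ID and a frame counter behind a fixed preamble, protected by a checksum. To be clear, the frame layout below is hypothetical, invented for this sketch; it is not the real SIDLOC specification, which also covers modulation, error correction, and regulatory constraints.

```python
import struct
import zlib

# Hypothetical frame layout (NOT the real SIDLOC spec):
# 4-byte preamble | 32-bit spacecraft ID | 16-bit counter | CRC32
PREAMBLE = b"\x1a\xcf\xfc\x1d"

def encode_beacon(spacecraft_id: int, counter: int) -> bytes:
    """Pack an ID and a frame counter into a checksummed frame."""
    body = struct.pack(">IH", spacecraft_id, counter)
    crc = zlib.crc32(body)
    return PREAMBLE + body + struct.pack(">I", crc)

def decode_beacon(frame: bytes) -> tuple[int, int]:
    """Validate the preamble and CRC, then return (id, counter)."""
    if frame[:4] != PREAMBLE:
        raise ValueError("bad preamble")
    body, (crc,) = frame[4:-4], struct.unpack(">I", frame[-4:])
    if zlib.crc32(body) != crc:
        raise ValueError("CRC mismatch")
    return struct.unpack(">IH", body)
```

A receiver would scan the radio bitstream for the preamble and only trust frames whose CRC verifies; publishing a layout like this openly is exactly what lets anyone on a ground station network decode it.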
So why is this thing interesting? Well, it emits a beacon, a certain kind of signal, which allows a satellite or a spacecraft to be identified easily, and allows it to be located easily. So the interesting thing is that having an open source protocol is cool; having a protocol that is actually implemented in an open source way is way cooler, and that's the way we should work, because if we went out there and built a proprietary solution, what would be the point? Why should anybody adopt such a thing? It's stupid. It's five by five centimetres. It's huge. And this design, this actual design, will go on Ariane 6, on its inaugural flight, and will actually allow us to track Ariane 6 using SatNOGS. And yes, you have to use KiCad for that, and yes, you have to use FreeCAD for that, and you have to do everything open source, if you are willing to do something that's impactful for the space industry. If you are just trying to make a buck, then, yeah, go full proprietary, I don't care. I'm trying to create something that will have an impact on people, on universities, even on corporations, on someone trying to build their own satellite because they believe in something. Yeah, you have to go fully open, and let them do their commits and let them do their forks and go on. Other questions? Yes, please. You mentioned ground stations a couple of times. What is a ground station, and is that something that we, as people living on Earth, would want to set up in our backyards, and what does it do that will help people? Yes. So a ground station can be, that's a little one over there, something like that. I have to admit that there are also bigger ground stations in the network. The Dwingeloo Telescope in the Netherlands is actually a SatNOGS ground station, which is a little big for my backyard. I don't have such a big backyard; it's a little bit bigger than my house, actually.
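An aside on why networks of ground stations matter at all: orbital mechanics limits how long any single station can see a low-orbit satellite. A rough back-of-the-envelope sketch, assuming a circular orbit, visibility down to the horizon, and ignoring Earth's rotation (the 550 km altitude is just an illustrative value, not a specific Libre Space mission):

```python
import math

MU = 3.986004418e14   # Earth's gravitational parameter, m^3/s^2
R_EARTH = 6.371e6     # mean Earth radius, m

def orbital_period(alt_m: float) -> float:
    """Circular-orbit period from Kepler's third law, in seconds."""
    a = R_EARTH + alt_m
    return 2 * math.pi * math.sqrt(a**3 / MU)

def max_pass_duration(alt_m: float) -> float:
    """Upper bound on a horizon-to-horizon overhead pass, in seconds.

    The satellite is visible while it traverses the arc of its orbit
    above the station's horizon plane; that arc spans 2*acos(R/a).
    """
    a = R_EARTH + alt_m
    half_angle = math.acos(R_EARTH / a)
    return orbital_period(alt_m) * (2 * half_angle) / (2 * math.pi)

period = orbital_period(550e3)       # roughly 95 minutes
longest = max_pass_duration(550e3)   # roughly 12 minutes, best case
```

Real passes are usually shorter than this best case (most are not directly overhead, and usable elevation is well above 0 degrees), which is why a single station gets only minutes of contact per orbit and why sharing hundreds of stations worldwide is so powerful.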
So the thing is that people create SatNOGS ground stations, or actually make their own, or use other types of hardware used by radio amateurs to build one, because they like to share data with their community. Some people find it interesting because it can actually be an easy weekend project for them. It wasn't easy for me, but I'm not good with that kind of thing at all. But the thing is that people like to assist other communities and other university teams. A ground station actually receives data from a satellite in low Earth orbit, so it only has around five to ten minutes per day to actually track the satellite. So if there is a university team, like the university team next door, that needs data, they can use my own ground station, they can use my friend's ground station in the United States, or they can use a ground station in China, in Taiwan, in Connecticut, or in Russia, and actually do stuff. So in that case, they mostly like to assist other communities. Regarding hardware, as far as I know, most of the stuff that is designed to go to space has to be subjected to very rigorous testing, and the testing equipment to do that is very expensive and also proprietary. Even the specifications, most of the time, from my experience, even specifications like the dimensions of a vacuum chamber, for example, are under NDA too. Do you have any solutions you are doing yourselves in order to reach the same standard that is required? That's a great, almost insider question. He is literally from the university next door. I know him, okay? I know people. So, yes, we are trying to have our own designs. We are trying to build our own vacuum chamber. We are trying to build our own clean rooms. And we want all of this to be open source and documented. Right now, what we do is go to the University of Catalonia and do stuff there, and we pay to do stuff there. We have to pay for the environmental testing.
We have to pay to do vibration checks. And in the future, we want to be able to have our own stack, and have that stack available to open source projects in general, and the physical infrastructure too, so that an open source project can send their own hardware to be tested, hopefully with a person there to monitor the procedure. Yeah. Other questions? So I have another similar question, I suppose. The collaboration with different projects and being open source is amazing, but have you been able to have any success with getting some of these proprietary companies to open source their stuff, coming from the other direction? No. That's it. Especially in space. Space, as I said before, is a very, very complicated thing, I think, and people tend to be very protective of their version of things, usually over nothing very interesting. Sometimes you see devices and hardware that are used all around, sold by big companies; they ask so much money for them, right? Why? It's so simple. On the other hand, yes, it has orbital pedigree, as I said, because it's been up there so many times, so they charge whatever the hell they want for it. But the thing is that most of these companies see that there might be another way to do this. Interestingly, the space agencies say that there is another way to do stuff, because the space agencies want to protect their own interests, and the interests of the people they represent. So maybe the space agencies say, yes, I need that open-sourced. And usually that's where we come in and say, okay, let's work together with the community, let's work together with large integrators, maybe even, I won't say names yet, because I haven't found them yet. But you start to see people looking at open source in space. They see that it might make sense; it's not a good fit for their business plan, but, yeah, maybe some other time, and we will see that change.
The guys next door have seen it already; they already used some of our hardware, right? They already took it and put it to their own use, and they see that they might even go forward in making their own open stuff, because they are cool. But how many people are there? I've also seen recently, I think, a CubeSat that got released as an open source project, and I've actually seen a really nice graphic display of it working in space; it's really cool. And the thing I see is that we might see change from university teams first, pushing the envelope forward, making open source solutions, because they need them to work better. And, let's be honest about it, the guys next door might not always be next door. They might go into the industry, they might go somewhere else, they might change careers and become artists. But the thing is that the source code has to remain there. They're a publicly funded university; they need their creation to be used for the betterment of everyone else. Other questions? Thank you. We're sort of out of time, so I'm going to use my session chair position to ask the last question: you've gone to space with open source; where are you going to go next? Wow. We are still going to space, but the thing is that we have only reached low Earth orbit, right? So: space beyond low Earth orbit. Thank you. For everyone who is not going to space today, you may want to go to the coffee break, which is right now. I'd like to point out that your printed badges contain a schedule, and that schedule is wrong; please use the website. Next up is the goals panel, moderated by Lydia and talked about by Carl, Nate, and Joseph. Before we get on with that, I'd like to remind you there is a program in your badge. The program is wrong; use the website. But your badge is useful for other things: it tells the people around you who you are, and if you look on the inside, it will also tell you things like
bus schedules and the wifi password. So examine your badge; it's full of good stuff, and an incorrect program. That said, I would like to hand off to Lydia, who is going to run the KDE goals panel thingy. Take it away. The time has come to look at what has been happening around the goals that we voted on, and that's what we are going to do now. We are going to start with Nate, talking about the automation and systematization goal, so please welcome Nate. Hello everybody, welcome to the panel. So today I'm going to talk about the automation and systematization goal. This is a goal that was chosen last year, so this is going to be kind of a mid-cycle update. Let's get into it. First, let's start with a little bit of background about KDE and why the structure of KDE makes this goal important. All of us in KDE are aware that KDE is a multi-generational organization, and KDE's contributors have a defined life cycle, as well as different stages that we often move through. At the beginning, we have people who start out as students or young hackers in university. People in this group are marked by having lots of time, lots of passion, and lots and lots of contributions to KDE. Then time marches on, as it always does; people become young single professionals, and their KDE time becomes hobby time. At this point, people have jobs, and jobs take up more time, and as a result KDE contributions tend to fall off a little bit. Then time continues its vicious cycle, and people become professionals established in their careers; they have families, and KDE time often drops off quite a bit. And then, finally, if all goes well, people retire. They become financially successful, and they have time for KDE again. But one thing you can see with all of these different groups that people can fall into is that the amount of time we have for KDE changes. That means there's a lot of turnover in KDE. One thing we're really good at is making sure that people cycle in and out and learn from each other, but it's really
important that we keep the knowledge that people bring to KDE and it must stay in KDE people bring knowledge all the time they work on cool stuff some of that knowledge gets passed on to other people and then we learn from each other I think all of us have had that experience some of that becomes embedded in technical processes that we work on and that we contribute to and unfortunately some of that knowledge just gets lost when people leave KDE and that's the thing that we really want to try to avoid and that's the thrust of this goal is how can we minimize that knowledge leakage when people inevitably leave KDE either temporarily because they've moved on to a new phase of their life or permanently for whatever reason which is fine because people come and go but we want to make sure that their knowledge stays within KDE there are many different types of knowledge that gets lost so let's go over that a little bit the first type of leaky knowledge is processes that are done by hand and generally not documented these are things we really want to avoid the next is when people have personal tools that they write for themselves to take care of certain things but then they don't share them publicly when those people go away the personal tools are essentially lost as well next we have public tools which is better public tools are better than private tools public tools have to be documented or else nobody knows how to use them when people go away and then they get rewritten because it's easier to rewrite than to understand and also these public tools sometimes are not run automatically for automatic periodic processes and that's important too to make sure that people retain knowledge of how to use them finally we have, not finally second to finally we have knowledge that is gained alone we learn something but we don't talk to other people about it talking to other people is really important and documentation also has to be kept up to date when it's not kept up to date it's 
not useful all of this leads to the very familiar feeling of if I stop doing this it won't get done and then you feel like you have to keep doing it and if you don't everything you're working on will end up as an agent ruin like this so that's not a good thing we want to avoid that feeling the basic problem here is that working alone sucks because you end up doing the same work that other people have done before you end up fixing the bugs that other people have fixed in the past you end up getting your merge request nitpicked to death with style comments because people have different opinions on what should or shouldn't be there you end up talking to users about the exact same problems that you fixed over and over and over and over again you end up triaging the same bugs over and over again and eventually when you decide to go on vacation nobody takes over what you were working on and so it eventually gets dropped on the floor these are things that are very unpleasant they tend to lead to people burning out leaving KDE not having an enjoyable time and we want for those things not to happen you don't turn into this guy and destroy your computer because then you really can't work on KDE stuff so the solution is to externalize your knowledge to get it out of your own head it's really important for the thrust of this goal that we be scripting our tasks and that we have those scripts especially if they're for periodic tasks that they get run automatically rather than manually they need to be documented on them too so that if one person is gone then another person can take over without missing a beat we want to make sure that if people are doing similar tasks that we consolidate the tooling that they're using so that each person isn't having say personal scripts that they're running that only works for them and then it only works for their process and another person does a different thing not great we need to collaborate on that we need to make sure that we're keeping up 
to date on test cases because we want to make sure that our code is well commented we want the comments to be good we want the comments to explain the why not the what the what is usually pretty obvious the why often is not obvious get history I will mention real fast is not a substitute for code comments because you don't see the get history at the moment when you are reading the code you don't see the get history you don't see the get history when you are reading the code depends on your client right depends on your tools so if you're using Kate which all of you should then Kate has an amazing plugin that can show you the very last comment the very last commit that touched a particular line but all of that is a more indirect process than just seeing a comment right there in the code that explains what's going on it can even reference the get history but it is not a substitute because otherwise some well-meaning do-gooder who does not have an editor set up the way yours is will look at this code and they'll say why did somebody write it this way I'll just rewrite it and then a whole rabbit hole gets gone down and it wastes everybody's time we should also be making sure that stylistic stuff is done with auto tests and with CI so that we're not endlessly arguing over whether there should be a lot of code and we should also be making sure that our documentation remains up to date because we're actually using it that's the best way to make sure it happens if we're not reading our documentation we won't notice problems and we won't be able to use it and we'll be able to use it and we'll be able to use it and we'll be able to use it and we'll be able to use it and we won't notice problems and we won't see that it needs to be fixed so when it comes to what's happened over the last year a lot of really cool things have gotten done we've had 12 months that's pretty good I want to go over real fast some of the things that we've managed to accomplish we've added tons more 
auto tests everywhere personally I'm involved in the plasma project I've seen a lot there there are a number of apps where people have added auto tests this is a really useful thing we've gotten a whole new testing framework using a system called Selenium that allows us to do user interface testing Selenium is really cool because it also goes through the accessibility API so in order to make it work in the first place you have to have adequate accessibility support in your software so this gets to two goals at once because at the same time that you improve accessibility you make your software more testable you're making sure that the accessibility code is being exercised and that it doesn't fit right over time because if it does, you'll notice it and that's really great we have for the KDESRC build script that many of us use for compiling software we now have a dependency regeneration tool that automatically makes sure that the dependencies for HKE repo are up to date, that's really great we have tooling for updating apps on the Microsoft Store which has been written last year, that's some really amazing stuff we have an increasingly large set of changes to make tests mandatory to pass on your software so that you can't press the merge button if the tests are failing, this is excellent we're not 100% there yet but anything is better than the 0% we had a couple of years ago we have a bugzilla bot now and the bugzilla bot takes care of various bug triage tasks, simple things at the moment, saying things like a software that's too old report it to your distro things like that we have updated a ton of updated documentation over time to make it useful so that people can actually start using it we also have continuous integration jobs to build flat pack bundles for many apps which makes them easier to test and also allows us to see when that process breaks, so now it doesn't break as often which is great we have CI jobs to enforce code formatting in C++ in some projects 
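As a toy illustration of the kind of property these accessibility-driven UI tests can check, here is a small sketch in plain Python. It is not the actual selenium-webdriver-at-spi API; it just models a window's Tab order as a mapping from each focusable widget to the next one, and asserts that pressing Tab repeatedly visits every widget instead of getting stuck in a "tab trap," the keyboard-navigation problem the accessibility talk describes.

```python
def tab_reachable(first, next_widget):
    """Follow the Tab order from `first`, collecting every widget
    visited before the cycle repeats."""
    seen = []
    current = first
    while current not in seen:
        seen.append(current)
        current = next_widget[current]
    return seen

def check_no_tab_trap(all_widgets, first, next_widget):
    """The property a UI test would assert: every focusable widget
    is reachable by pressing Tab repeatedly from the first one."""
    return set(tab_reachable(first, next_widget)) == set(all_widgets)

# Hypothetical dialogs. In the broken one, Tab cycles between "ok"
# and "cancel", so "help" is unreachable with the keyboard alone.
broken = {"ok": "cancel", "cancel": "ok", "help": "ok"}
fixed = {"ok": "cancel", "cancel": "help", "help": "ok"}
```

A real Selenium test would drive the running application through the accessibility API and read the focus order from there; the point of the sketch is only the invariant being checked.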
This is something we have in some repos, not all, but it's really cool stuff, and it has saved us from a ton of arguing over code style in merge requests, which saves everybody's time for more useful things. We also have a CI job to validate JSON files, now that everything has been ported to JSON, which is used to auto-generate desktop files; so now you don't have the experience of accidentally breaking your JSON file right after you hit the merge button, which is no fun and wastes everybody's time. You can see a theme here, which is: let's not waste everybody's time. Finally, we also have a hook script to prevent you from changing translated text in Git. Now that translations live in Git repos, we were seeing a bunch of people saying, "ooh, this is great, this means I can now do translation work in Git." Can't do it. So now we have a thing that tells you so, and we don't have people explaining that over and over again, which was no fun.

If any of this sounds interesting, there are many ways you folks can help. The biggest one is to write more Selenium UI tests for apps that have the framework set up; this directly benefits multiple goals, as I mentioned earlier, since it also tests the accessibility stuff. And if your app does not currently have Selenium set up, set it up, it's really cool; there is a wiki page that explains how to do that, and some of this is linked from the goal's wiki page, which I'll get to at the very end. Another thing you can do, in some cases a sort of low-hanging fruit, is to make tests mandatory to pass before merging. If your project is in the fortunate state that all of its autotests already pass, fantastic: immediately go turn on the thing that says they have to keep passing, so then they won't just start failing at some point in the future, because that will happen. We also want to adopt the code-formatting stuff much more widely, so we don't have as much arguing in merge requests about that. That's another small thing that is relatively easy to do from a technical standpoint; from a sociological standpoint it's harder, but you can say "Nate told you to just do it," and then people yell at me and not you, and that'll be easier. We have the kdesrc-build dependency-regeneration script I mentioned earlier; that's something that could be automated to run periodically, so Nicholas over there doesn't need to run it manually once a week, which I assume is a waste of his time. We also have the Bugzilla bot; it's written in Ruby and is rather approachable. We can make it smarter, so that it does more of our work for us and we have less bug triage to do. Basically, be lazier: I want all of you to go out and be much lazier, so you have less busywork to do, because in the process of being lazier you're also helping take the amazing knowledge that's in all of your heads and put it into KDE, where everybody can benefit from it and it won't be lost.

There are also some even bigger ideas I've got for this goal that have been worked on a little bit here and there but really need the helping hand of technical experts to make possible. Basically everybody I look at in this room is much smarter than me, so I'm looking at a whole room full of technical experts; if any of you want to work on any of these things, it would be a fantastic way to help the goal. We've got tasks like consolidating release tooling. I think it's not lost on anybody that we have many different release vehicles: we have Gear, we have Frameworks, we have Plasma, and we have individual things besides Gear. If there were a way to consolidate our release tooling so that it can run automatically and works for all the different release vehicles, that would be fantastic, because it would make it much easier for people to be release managers, and not so much of the work would have to be borne by one particular person, with that person feeling stressed out if they happen to be on vacation when release day arrives, etc.

There's also this other moonshot idea of using AI to triage bug reports. People keep asking, "can you integrate ChatGPT into KDE? Can you integrate ChatGPT into KDE?" And this, in my opinion, is how we do it: we have a robot triage bug reports, because this is something that robots can potentially be good at, so let's do that if possible. The next thing also helps with onboarding: if kdesrc-build could automatically install the third-party projects needed to build KDE stuff, instead of making people go and do that themselves, that would be a huge help; it would cut down on an enormous amount of common chatter that people end up having to handle. In the same vein as the AI bug triage, we could have a chatbot answer common help questions: things like NVIDIA drivers, and "why does this update not work in Discover, because my distro's update policy is completely broken," things like that. I'm sure all of us are very tired of answering these types of questions, I know I am, and if there's any way we can have a system do that, that would be much better. There are also ideas for making our icon-design pipeline much easier; it relies on information that is stuck in the heads of several people, some of whom I see in this room, and I think if we could make it a more programmatic process, say by having icons generated by combining symbols together in a code pipeline rather than doing everything manually in Inkscape, that would be fantastic. We could automatically generate AppStream release notes from the commit-message tags and from GitLab tags; we have all the plumbing needed to make this work. We even have support in AppStream itself, which recently gained support for fetching remote release notes, which was the big blocker last time; it's just a matter of wiring it up.
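To make that last idea concrete, here is a hedged sketch of what such a generator could look like. The `RELEASENOTE:` trailer is a hypothetical convention invented for this example, not KDE's actual tagging scheme, and the XML rendering is deliberately simplified (no escaping, no version metadata).

```python
def extract_release_notes(commit_messages):
    """Collect RELEASENOTE: lines from a list of commit messages.

    'RELEASENOTE:' is a hypothetical trailer convention used here for
    illustration; real tooling would also look at GitLab tags."""
    notes = []
    for message in commit_messages:
        for line in message.splitlines():
            if line.startswith("RELEASENOTE:"):
                notes.append(line[len("RELEASENOTE:"):].strip())
    return notes

def to_appstream_description(notes):
    """Render the notes as the body of an AppStream <release>
    description (simplified; a real generator would escape XML)."""
    items = "\n".join(f"    <li>{note}</li>" for note in notes)
    return f"<description>\n  <ul>\n{items}\n  </ul>\n</description>"

commits = [
    "Fix crash on Wayland\n\nRELEASENOTE: Fixed a crash when closing windows",
    "Refactor internals",  # no trailer, so it is skipped
    "RELEASENOTE: Icons now follow the system color scheme",
]
```

Running `to_appstream_description(extract_release_notes(commits))` would yield a `<description>` block with one `<li>` per tagged commit, which is the piece that AppStream's remote release-notes support could then fetch.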
then we don't have to spend so much time manually writing release notes for every single release this could be a big benefit I think and then there's also that this week in KDE blog that some guy named Nate writes which for some strange reason is not on KDE infrastructure yet so maybe he could finally get off his butt and do something like that I think that would be good for this goal as well if any of this sounds interesting you can get involved in a couple ways I'm the goal champion you can always contact me I'm Nate at KDE.org I'm around all the time I think you probably know how to contact me otherwise there is this KDE.org slash Goals link where you can find information about all the goals including the automation goal we also have a team on Sprint that you can join there's not much activity there in that particular space right now so you can help change that we have a buff session at noon on Tuesday that you can go to if this sounds like an interesting topic and we also have a Sprint planned for some time next year so keep your eyes and ears peeled for that and with that I would like to say thank you all for listening and we can move on to the next one I'm Nate and we move on to accessibility with Carl tell us about accessibility Carl Hello everyone I will talk about the second goals about accessibility as we are called KDE for all because we are in community we want to have software to work for everyone even people with disabilities who need to use the screen reader or who can't use the mouse as this code from Tim Bernali who says the power of the web is the universality and accessibility is an essential aspect and I think that also applies for KDE like why is that important it's good for us to reach more people we do that as well by putting our software to windows or to make or to hungry we don't want to only target Linux users but we also should not only target people who are experts with computers and we can't see the software and it also benefits 
everyone like even for normal users being able to use the software with the keyboards like for private users it's quite important like they can be faster to use the software like for example another aspect is changing the fonts like to increase the size of the fonts as well like useful for everyone and not only people with a reduced visions I mean like accessibility as well including the usability generally like of the software in general and another aspect is that accessibility is a requirement for public sector organizations and if you want to be able to use the software we should try to ensure what case software follows the requirements follows the accessibility requirements so what did we do like last year since last year I mean like we started testing the software with a screen reader or with a keyboard only to see what can be used or what can be like just like by listening to the orca see what or we can use the software currently and that allowed to like find a lot of cases where it was not that great like often there was tabbing back loops where you just type and you switch the cycle between elements but you don't go like over on the apps which is an issue because then the screen reader user or just the keyboard user won't be able to navigate the entire applications just with the tabs or the keyboard navigations and we started writing automated tests like I already said which is like really cool framework to work test and also like ensure what your software is accessible I mean it doesn't ensure it like you need to work on testing with like a screen reader but at least it helps a lot and yeah we had like multiple season of Calibre projects on that topic as I meant to rate Ritchie for Tocodon and Joseph did like work on Gekampi with Senon we also had like some blind users coming to the KDSS beauty channel and testing the software for us which was quite great like to have like because like when you can see what the software is it's a bit harder to imagine what 
the difficulties are but when someone comes and tell you yeah that buttons here I don't know what to do and then oh actually there's a Kigami server handle to yeah that's how you detect malfunctions yeah I really like after our testing we started like improving the software there was like many patches for plasma credits and as well as the applications like a good example is Clopatra Ingo as a talk I think later today in the room we hopefully like will give you a bit of technical details and we met Clopatra good for accessibility we use the Kali frameworks because like if you ensure that the Kali frameworks are accessible it's easier to build applications that are also accessible if you use common shape components we also need some improvement upstream and cute we send the multiple patches to cute to improve the accessibility for for the buttons for the press action something or the name was missing by default it was easy to fix like what we get like by testing the software we find easy things to fix and there's like a lot of areas where small improvements already make a lot of difference and we also send a few patches to the screen reader for Linux the GNOME project so it's good to see the software collaboration on stuff like that yeah so what did you learn yeah I mean like having the community vote on the goal doesn't make it magically happens like we still need to have more people working on the accessibility there's also like not that much documentation from the QT side about accessibility and what's something that we should try to improve on the upstream I mean there's like documentation for the QT class but there's not a lot of documentation or to use that, best way to use that or to make sure your application is good with accessibility and there's a lot of definitions and blog posts about the web accessibility but for QT there's not that much yeah so future yeah I mean like we should continue working on accessibility it would be great like if you're more 
people like we wanted to organize a sprint next year with the other goals and like for me personally like what I want to do more is what more blog posts like more community to make more people aware of the community how to improve accessibility like other documentation and stuff like that but it didn't really have a lot of time like it's always like I have many projects aside from accessibility and so it's great like if more people would join and help with that um no ah it's a bit suspicious let me go back how can you add like test applications like install an orca and try to use the application with orca and see how well it works yeah, working documentation is important to like document the best practices for kids for getting software general for the world like if someone could help me like what blog posts and stuff like that protect the current forest and the community would be really helpful and like as a tooling like think Volca work it a few years ago on gamma ray accessibility inspector it would be great to provide this effort to be able to inspect the accessibility state in gamma ray as well as overstate the visual state and everything yeah, like if you want to join there's a caddysvt channel on matics, I also have a label on ESC and we have a buff both on first day at noon and yeah, join us twins yeah, about it thank you Carl brings us to the third and final goal of sustainability with Joseph let's see presentation there you go apologies for breaking the aesthetics I tried to convert the template to la tech but it wasn't successful I'm actually Cornelius Schumacher he's the champion of this goal I'm here pretending to be him so the usual disclaimer applies any errors are my own yeah, I'm going to present about the sustainable software goal which is part of this sort of larger Katie eco initiative which started a couple years ago if you would like the slides are available to download at the sustainable software gold repository on invent I'll come back to 
this at the end if you want or if you have time now to scan it and I'm going to go over just a couple of the things that have happened since the goal has been adopted and voted for in October last year one of the things is the publication of the Katie eco handbook this was the culmination of the work done in the Blower Angle for Floss project which ended in March but it also coincides with the sustainable software goal and a lot of the topics addressed here are directly relevant for what the goal has so the handbook if you haven't seen it yet is broken up into three parts currently the first part is meant for a general audience about why is this important how does software influence resource and energy consumption the second part is about the Blue Angel eco certification criteria and how they align with free and open source software and the third part is how do you fulfill the criteria with detailed instructions about measuring software following the guidelines in the Blue Angel criteria and then fulfilling the other criteria in it that's the first iteration we want to continue expanding it and I'll talk a little bit about that in just a minute the as many of you I'm sure no ocular was eco certified last year with the Blue Angel currently it's the only software that's certified as resource and energy efficient that eco certification has opened up many new channels to KDE to present the work that we are doing one of them for example is in December at the open UK awards in the House of Lords in London this was an event organized by the open UK advocacy group which advocates for open tech the host of the event was Francis Maud who's a member of the House of Lords and he's the minister who a decade ago created the gov.uk website which provides information about open data open formats and policies regarding that a co-host was KDE's own Jonathan Riddell and he was there to also present what KDE is doing in the KDE eco project so yeah really big channels have opened up to 
us in this regard another one was Cornelius who presented at the Green Party event on green digitization which is a project by design this was an event that featured many prominent people like Kari Doctorow gave a speech the Germany's vice chancellor Robert Habek and Cornelius participated as an expert given the blue angel certification of Ocular in the right to repair workshop that was organized and this event and this post was a very popular post that featured in the Hacker news and there's a huge spike in the views in the KDE eco project website after this so it garnered a lot of attention and another project or several projects not just from KDE eco but from the season of KDE in general was reported on in Heissa DE which was a big publication for tech news in Germany it featured all of the season of KDE projects but the eco ones were particularly prominent this year we had three projects working on sustainability issues one was this tool that was designed by Emmanuel Charu I'm not sure if I'm pronouncing that correctly called KDE eco test and this tool is designed to make usage scenario scripting easy and robust so the existing tools that there are have various issues and this is trying to make it so that the process is simple like some of the tools but more robust because it's not based on pixel locations for the emulation but rather working on command line triggering of actions when you're trying to emulate user behavior this tool is quite limited and it's now been expanded to have many more features from Mohammed Ibrahim we had another project which was looking at the documentation for ocular trying to extend it to Kate in particular and that was from Rudox Carp and then as has been mentioned now by both the other goals the selenium testing so just a small correction I wasn't actually working on the G-Compre testing Ibrahim who did excellent work using selenium to emulate user behavior for G-Compre and as Emmanuel wrote in a blog post this is a project that 
actually hits all three goals Nate has already talked about automation Carl has already talked about accessibility it's also used for usage scenario scripting to emulate user behavior so that we can reproduceable energy consumption results for software right now there's a project going on in Google Summer of Code which is trying to make the lab that we set up last year in KDAB Berlin to measure the energy consumption of software accessible remotely so all this outreach is great but the actual reason we're doing this is because we want to measure energy consumption of software and drive it down when we can and this remotely accessible lab will make that much more easy to do so the idea here in the lab is that the lab is set up with dedicated hardware for measuring the energy consumption of software a power meter, an external power meter that's just measuring all of the energy draw when using the computer and then it's then aggregated onto another computer to collect the results and then you can analyze them and the idea is to set up a interface through the GitLab CI so that you can upload your code and then tell it you want to do this test it will then send the commands to the lab in KDAB Berlin run your scripts give you the results in a usable format and then give it back to you so that you can see the energy consumption of your software and if you're interested in eco-certifying it, this would also be one of the criteria for eco-certification so we're trying to make this automated and easy and accessible to everyone this is just a bit more details about it so the software will be installed as a flat pack bundle and then that's then run on the software the hardware at the lab and then this is then analyzed just another thing that we're working on is an awesome list for sustainable software this is again the sustainable software repository there's a great list of resources related to general green coding best practices as well as how do you measure software, what 
tools exist etc so check that out if you have some resources that you want to contribute to it please do so there's several talks that are taking place today tomorrow and next week related to the sustainable software goal so one is right after this Fulker is going to present about measuring the energy consumption of software tomorrow Harold is going to present Selenium GUI testing which is relevant for all of the goals and Monday we're going to have a buff for measuring software so come by if you're interested in the process of how to measure your software there are other many related talks this is just a couple that stood out for me so on Saturday there's the flat pack and KDE from Albert as you saw flat pack is part of the remote eco lab process documentation that's also come up several times and this is maybe not obvious how it relates to sustainability maybe it is I'm not sure if we want to achieve a sustainable circular economy for software and also hardware we need documentation for repair and reuse of software right this is the sustainable angle on that we can't use the software long term if we don't have the documentation for how to use it and repair it and etc another topic that I just thought might be interesting to think about in terms of sustainability the KDE embedded embedded systems are not the systems that were targeted by the blue angel certification the blue angel is trying to address the issue but the hardware is getting more and more powerful and software is becoming less and less efficient because of it it lets us get lazy embedded systems are the exact opposite you have a limited amount of computing power and you have to optimize to fit that and it has I'm sure many overlapping topics of the sustainability efficiency side in terms of optimization so I just thought I'd point that out and then there's another talk about technology information from solar panels directly into KDE plasma which is all happening in the next two days so one of the 
things that we have on our to-do list so as part of the Selenium season of KDE project Nitin wrote a guide which you want to add to the KDE go handbook as the next chapter we need people to test the guide to see what needs to be removed or what's accurate or inaccurate so if you have a chance and you're interested in checking out Selenium GUI testing maybe check out the guide and see how it works for you and give us some feedback on it another idea we've had for a while now is this idea of a KDEco badge so eco-certification is nice it's third party, independent of KDE but we can also do something that's internal and define certain criteria which is important for KDE software and if you fulfill these criteria you get a little badge that says you're fulfilling the sustainable software goals of KDE if you're interested in working on this please be in touch we've already started taking some steps towards an eco tab that would be included in KDE software so you have the about contributors tabs and then we'd add an eco tab which would then highlight aspects that are which are sustainable from things like eco-certification but also links to documentation source code if there are measurement data that aren't part of a certification process you can still link to it and that's relevant for sustainability issues if you're interested in working on that please be in touch if you have other ideas you're more than welcome to join we have monthly meetups every month and there is a matrix room and several other things which I forgot to put in the slides and you can find that information at eco.KDE.org it's quite easy to find and then regarding the sort of general unifying aspect of having goals one of the ideas that came up when discussing the presentations here was having maybe crossover presentations in the different groups that are working on automation to come into our meetup and maybe discuss ways that it overlaps with what we're doing in the sustainability so if you're 
interested in that or if the champions are interested in that maybe that's something we can talk about there's already mentioned a joint sprint next year any other ideas we can discuss in the panel and I believe that's it thank you thank you very much and now we have quite some time for questions about the goals or the goal process as a whole and I believe aid will be the mic runner we will run around questions I don't know if this is too detailed or not able to be answered but how hard was to get ocular to get the certification did something need to be changed many changes how was that experience to get to that level so the most labor intensive aspect of it is the usage scenario scripting and the measurement process which if you're going to start adopting Selenium you're already taking care of a big chunk of that work and as well as accessibility and automation so we can incorporate maybe some of this that work into the general development process and then the measurement is just a matter of having the access to a suitable lab for fulfillment everything else is just documentation free and open source software is recognized as being a more sustainable approach to digitization given various aspects of the way it allows users to have more autonomy about how the software is used which can influence energy consumption removing vendor dependencies so that you can continue to support hardware over a longer time etc etc these are all things I think we take for granted and this is obvious but this is putting it in terms of a sustainability angle and that part of the criteria are actually just documentation and understanding how you fulfill these criteria so thank you this is a hybrid conference we have questions from online and Nelfitos is going to be the voice of online here you go actually there was Alan asking a question about the lab and how do we make sure to be able to compare over time the power consumption of our software if the material use their changes and I see 
Cornelius stepped up to answer it, so, Joseph, I don't know if you want to add anything to that. So the question was: how do we keep the results usable over time, so that we can compare as the hardware changes; how do we keep the measurements up to date? Right now in the lab we have a few different computers, one of which is the recommended hardware for the Blue Angel certification, and we have some others. One of the goals, the moonshot goals, is to have several options of hardware, so you can say: I want to test on this hardware, which might be a bit older, or I want to test on more recent hardware. But I think the real answer is documentation of which hardware was used, so that you have a maximally similar environment if you were to retest and want to compare the results directly with a test you did previously.

Great, thank you, Joseph. And there's another one for Nate. It says: wasn't there a prior effort at an icon design pipeline, called Ikona? What happened to that?

That's a good question. Ikona was a standalone app that would definitely have helped with the icon design pipeline proposal. It was an app for previewing an icon that had already been made, in various environments and against various backgrounds. The idea I was bringing up on that slide was more about aiding the process of creating the icons in the first place, which right now is quite error-prone and requires understanding the details of how the XML inside SVG files works. So there's definitely work that can be done at the end to verify the end result, but I also think it's important that we make the process of getting to an end result a little bit easier.

Hi, I have a question about the KDE Eco measurement process for applications. I can imagine that you somehow need to make sure that no other software installed on the test device interferes with the actual measurement of the application that
you're testing; how do you approach this? Right now it's very simple: we turn off anything that could interfere while measuring. We have discussed ways to record the state the software was in when you start the first measurement and then put it back to that state for each measurement, so that you have a maximally similar environment, because as each measurement runs you're changing the system a little bit. Right now the way we deal with that is just removing the configuration files that might have been changed, and any files that may have been produced during the measurement, so that it gets back to a maximally similar state. That's certainly an area we could look at later, once everything is set up and the first step is achieved; then we can start thinking about how to improve the process.

I have one more question regarding the measurement lab: do you have plans to add support for mobile devices? Plasma Mobile is one of the things we support, and that's where energy efficiency is arguably most useful, because you don't want your mobile device's battery to drain completely. So, do you plan to support mobile devices? Short answer: no. It's not what we're working on right now, but the Blue Angel certification is extending its criteria to include mobile apps and client-server systems, and they might have some tooling there that could be useful for measuring mobile apps. At the moment I don't know how we would do that in our lab with the setup that we have.

Okay, thank you. I have one question for Carl. You mentioned that accessibility is also very important if you want public institutions to select your software. Do they have requirements, like you have to have a certificate, or do they do their own testing, or is it just that we tell them we made sure our software is accessible? I think it depends. For the web there is the WCAG
standard; there are multiple levels, and usually institutions want level AA at least, and there are companies who will do certification for that. I was involved in that in the past with Nextcloud; it's usually a certification, something you need to go through.

I have a question for Carl: how's the status of accessibility on Wayland? I've been using Wayland full time for a month; we can use Orca and it's working, but I think there's still some stuff that would be good to improve, though I'm not familiar with that area. We've got an answer here from the audience: there are definitely some things that don't quite work properly, accessibility-wise, in a Wayland session. A lot of the keyboard-related accessibility features, like sticky keys, slow keys, that kind of stuff, used to be implemented by the X server, but on Wayland that's not a thing anymore, so we get to implement that ourselves in KWin, and a lot of it is missing. I've recently worked on some of that, but it will take a few more patches to get to parity with X there.

I like this setup where we get answers from the audience, so maybe the panel has a question: do any of you have ideas how to measure mobile systems in a lab? We got an answer. Yeah, we built a custom setup: we hacked an Android device so that we removed the battery, and basically built a fake battery connected to the power supply; that's how we actually measure the power consumption while the device thinks it's on battery. Of course, most Android devices cheat when they're on battery: they remove or add some power features. We had that issue, for example, with VLC, where we got higher priority when plugged into an actual network. So that's how we did it, and from these devices we have an RS232 connection just to read the timings. It's a mess; let's talk at some point. To add to that answer: Arm has these giant labs of hooked-up devices with relays to reset them, so there are hardware setups for that. Any more questions or answers? I'm not directly involved, but
Collabora runs a LAVA lab to do a bunch of the tests for Chromebooks and Android, so they may have something; there are definitely ways to use LAVA to do that kind of thing. Let's talk.

So, next up, again from the internet, regarding the eco-certification: is there an ongoing measurement taken each time the software is released, to ensure there are no regressions in power usage over time? Let me make sure I understood: is there an ongoing measurement process so that we can see if it's improving or getting worse? The way I understood it: after you get the certificate, is there something ongoing? For the certification you need to measure regularly. The exact details are a bit of a gray area, but yes, you're supposed to measure regularly, and right now our approach is that we want to measure major releases. First we want to get the lab set up for easy accessibility; once we have that, the idea is to have regular measurements for major releases, and that's required for the certification as well. Thanks. Sorry, just one more thing to add: the eco-certification requires that the energy consumption doesn't increase by more than 10% during the time of certification, compared to when it was certified. So there are requirements on staying within a limit. That means we could lose the certification at some point, right? Okay, so we need to be careful. Yes; we've got 10% of worseness to play with.

Make the world a better place with your question. Just a small follow-up: do we need to redo the entire certification process in case we lose it, or do we just need to get back under the 10% threshold to be certified again? Good question; I don't know. Hopefully we don't find out, but you've motivated me to find out, so I'll see if I can find that information.

My question would be: do you run tests on a certain device to see whether using Plasma and KDE software extends its life cycle, for example, for an
organization? Are there tests showing how much e-waste you can avoid if you use Plasma rather than Apple or Windows devices? No. For the requirements of the certification you have to state the minimum system requirements and demonstrate that it can run on that hardware, which I think is set way too low. One of the things I have in mind, and would like to work on at some point, is a campaign on e-waste reduction with free software. Gathering data about hardware that is no longer supported by the two major vendors but can still run free software, and documenting that, would give us at least some idea of which devices, if not the numbers of how many, would otherwise end up in the landfill but can remain in use because of free software.

If I could add something a little more concrete to that answer: in the town where my parents live, my mother is involved in a community organization that is doing exactly that right now. They currently have 80 desktop computers that are over 15 years old and were retired recently; instead of being thrown away, they are being loaded with Kubuntu, used to teach computer classes, and given away to low-income students. So there's a specific example of how eco-friendly software can directly prevent e-waste.

Mine is to jump on the accessibility topic again; I just remembered another partial answer to the question of the accessibility status on Wayland. Specifically about Orca: the actual screen reader part just works, for the most part. One thing in Orca that doesn't work at the moment, I think, unless it was fixed recently, but I don't believe so, is a feature that draws something like a rectangle over the button it is currently reading. That can't work on Wayland at the moment because of the absolute positioning stuff, and there isn't a clear solution yet. I guess the final solution will, as usual, involve a new protocol, like every Wayland problem, but that will need to
be thought through. Accerciser, the tool where you can analyze the accessibility state of a program and highlight a rectangle over elements, has the same issue: that doesn't work on Wayland either. Thank you.

We've got time for one more question; you've already asked one, you're new. I think we can even do two. Okay, two. This one's kind of for all of you, although it sounds like the eco folks are already doing it: as an app developer, are there plans to create checklists of things I can do to help meet the accessibility goal, or the automation goal? It sounds like the eco team is doing that with the handbook already. That's also the idea of the badge, which is something we really should consider setting up: a sort of, if you fulfill these things, you reach a recognized sustainability level within the community. But not yet; that's definitely something we should do. I think it's a good idea too. All of the goals can probably benefit from this, and there's a lot of overlap: if you do the Selenium GUI testing, you're basically serving all three goals at once. From the automation perspective there's the obvious stuff: have a continuous integration system, do testing, make sure your tests are passing, things like that. For accessibility it's the same thing. Maybe a checklist, yeah.

And the very last question, thank you. I heard you mention before that you have an energy budget of 10%, and you could lose the certification if you go over it, though you're not sure. Is it possible for these measurements, and this certification process, to be automated at scale, so that when you merge or do a new release it's part of GitLab? How scalable is it to make it part of the release process, both for Okular and for other Plasma applications? So, how can we integrate it into the development process? That's the idea of the CI runner: if we want to measure, you can simply make your merge request and then
test it in the lab; that's sort of the goal with that. Do you think that when I merge something it's going to take, like, five hours to get the answer? If yes, we can probably talk about that; we can certainly design it to fit our needs. If you want to do eco-certification you need a certain number of measurement runs to have a statistically relevant measurement, but if it's just a quick thumbs-up, thumbs-down kind of measurement, I'm sure there are ways to shorten the process. So let's talk about how to design it so that it fits our needs. Then thank you very much; if you have more questions, find us during lunch. Thank you so much.

But it's not lunchtime yet; we're splitting into two rooms. Volker, I think, is in here; in the next slot there's Ingo, I think, on the other side. And I'd like to remind everyone that Qt is hiring; you can talk to them at their stand outside.

Everyone who successfully arrived at the conference did so thanks to KDE Itinerary, which is not the topic of this talk, so we're going to talk about energy consumption instead. So yes, following on a bit from the previous session: how can we actually measure the energy consumption of the stuff we build? To get started, why do we care about this? There are multiple reasons. There is, for example, battery use on battery-powered devices, the mobile case we discussed earlier: if you use less energy there, the battery lasts longer; a clear, tangible benefit for the user. Another thing is performance: all the electrical energy we put into a device comes out as heat again, and quickly enough the device throttles down. So if you put less energy in, that happens less often and we get more performance out. And then, of course, there's the climate impact of energy use; on that subject I can recommend watching some of Joseph's talks if you haven't already. There are some actual numbers in
there: I think the impact of IT is in the same order of magnitude as transportation and aviation combined. That is a lot, and worse, it's rapidly growing, so there is definitely something we need to do about it.

How does software impact the actual energy use; how can we change this? The obvious thing: every instruction the software sends to the hardware requires energy to execute, on any kind of hardware. But there are less obvious things beyond that, for example power management settings: hardware you currently don't use can be switched off. A lot of this is transparent to all levels of software and happens somewhere at the driver level, but in some cases it benefits massively from more context. For example, you can switch off the display backlight much more aggressively on the lock screen, while when you're watching a video you probably shouldn't do it at all. So we can provide useful input at that level too. Somewhat related is the idle behavior of software: what does it actually do when it's supposed to do nothing? More often than not it's not doing nothing; it's continuously checking whether there is more stuff to do, which means the hardware wakes up, checks, and we never reach the deepest power-saving states, again wasting energy.

Realistically we're talking about very small quantities here, so is this even worth the effort? Even small quantities matter: if you reduce the idle consumption by half a watt on a modern laptop, that can translate to 10% more battery, which might be a noticeable amount of extra runtime. So that matters. And there are more effects that support this. Some of the effects aren't linear, especially towards the extremes: in the high-performance region as well as in the idle region, even smaller changes can have a disproportionate impact. And then there's the scaling effect, which again you might know from Joseph's talks, and there are some actual numbers in there: this is not a one-time local optimization, it multiplies over time and it multiplies by the number of users.
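To make the scaling argument concrete, here is a small back-of-the-envelope sketch in Python. The wattage, usage hours, and install numbers are illustrative assumptions, not measured values.

```python
# Illustrative back-of-the-envelope calculation: how a small idle saving
# scales with users and time. All inputs below are assumptions, not data.

def yearly_energy_kwh(watts_saved: float, hours_per_day: float, users: int) -> float:
    """Energy saved per year (kWh) for a per-device power reduction."""
    return watts_saved * hours_per_day * 365 * users / 1000.0

# Half a watt of idle consumption, 8 hours a day, across 100,000 installs:
saved = yearly_energy_kwh(0.5, 8, 100_000)
print(f"{saved:.0f} kWh/year")  # 146000 kWh/year
```

Whether that corresponds to a small city depends entirely on the assumed numbers, but the multiplication itself is the point: tiny per-device savings become sizable at scale.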
I'm not sure I remember the numbers correctly, but I think within our level of distribution and scale you get up to the energy use of a small city within a year. That is a sizable quantity, and that's our level of scale; if we think about browser engines, that's a completely different level. The flip side, however, is that if we make mistakes, they are also going to be quite costly. So this is important, this matters, and we can make a difference.

How are we actually going to optimize our software? As with any other optimization, the first step is to understand what the software does, because what the software is supposed to be doing, what we think it does, and what it actually does are, more often than not, three different things. So we need to look at what's going on. On Linux, our go-to tool for that is the perf profiler: we run perf record with the application we want to measure, perform some usual operations in it, get the profile file out, open that in Hotspot, and look at the details; we'll see a few screenshots in a minute. That works for looking at a specific application. For the idle behavior we want the full-system view, and a specialized tool like PowerTOP gives us that overview.

And this is the result we get, in this case a flame graph view: on the horizontal axis we have cost, here time spent on CPU, and vertically we have the call graph, so you can dive into various parts of the application and see what's going on there. Now, seeing stuff in there doesn't automatically imply it's bad. You look for things that shouldn't be there, things that are surprisingly more expensive than you would expect; those are the interesting things to look at. Another useful view in there is
the timeline view: each row is one thread, horizontally is the time, and every orange marker is activity on that thread over time. As we would expect, during startup you see quite some activity, and towards the end the application sits idle, where we would expect to see no activity; but there are at least two threads where something is still going on. Again, that's something to look at, because we expect the application to do nothing at that point in time, so what's going on there? In Hotspot you can select that area of interest and filter on it, and you get the flame graph for just that particular area; and maybe it turns out that work is necessary, or maybe it turns out it shouldn't be there.

This is the output of PowerTOP: just a table in the terminal listing processes, sorted by the number of wakeups per second. Same thing here: this will never be empty and zero, there's always activity going on. It's about finding the activity that is avoidable, unnecessary, or could be done more cheaply. You will see KWin updating the screen, because you are looking at the output, so that requires screen updates. One example we had recently was a KDE application window sitting in the background showing up with 30 wakeups per second; that turned out to be an animation timer that wasn't switched off while in the background. That's the kind of thing to look for.

So once we have identified things we can optimize, how do we do that? If you're lucky, you find something where you can just do less; that is always strictly better, but those cases tend to be rare. Usually it's about making a different kind of trade-off. Let's take an example: you have some expensive computation. If you do it twice and only use the result of the second run, you just don't do the first one; easy. So let's assume you use the result both times; then one possible optimization is to store the result somewhere. If the result is an 8-byte number, that is most likely always going to be a good trade-off; if storing the result means 100 megabytes of I/O, then not so sure anymore.
So we need a way to qualify those trade-offs: is that really better? And this is somewhat complicated, because it can quite easily depend on the specific hardware. When we optimize for making software faster, we usually do it by measuring the time it takes until it's done; if we optimize for energy, that's the point where we actually need to measure the energy, and that is really the only viable approach. A lot of this stuff, as I said earlier, doesn't behave linearly, so don't trust your intuition: measure it, and then decide which option is better.

So how do we actually measure this? I'll focus on things you can practically do at home; anything that requires special equipment, or equipment with price-on-request price tags, is out of scope. If you're coming from an electrical engineering background some of this might look weird, but this is the stuff we found for home use. In terms of where we can measure, the first option is the AC side, the power plug. That has the advantage that you really capture everything that goes in, you don't miss anything. It is a bit of a problem for battery-powered devices, though, say when measuring a mobile, because you might also see the battery charging, or the battery might hide some of the use by buffering it away; ideally you manage to remove the battery entirely. And then there's the small problem of safety: AC power is deadly, so if we can avoid it, let's look for other options. The secondary side, basically the other end of your power supply, is, with the rise of USB-C, usually not practical anymore, because you need specialized equipment to measure on USB-C. And on your workstation it's even worse; you know that giant power cable that comes out of the power
supply: in theory possible, but not practical, at least not for what we're looking at here. Which leaves the third option: built-in sensors. Those are particularly widespread on standard Intel hardware, and some AMD hardware as well; pretty much anything sold in the last decade will have some built-in sensors. Battery management systems usually have them too, but they might be tricky to get access to; if you can get at them, they can be a very viable option for measuring on mobile. The big downside of these sensors is that they only capture a partial view: if a sensor only measures the CPU's consumption, anything else happening in the system is missed. For the battery management sensors that's less of a problem, because they really capture most of what's going on. These sensors also tend to be extremely precise, to the point of being a viable side-channel attack vector. On the other hand, you're kind of measuring your measurement with this, right? If you have a compute-heavy workload, the measurement's own cost disappears in the noise, but if you're measuring an idle scenario, the measurement itself might be the most costly thing running at that point in time. That's something to keep in mind.

That part I have to skip, because the slot turned out to be only 30 minutes instead of 40. So let's get to some practical devices you can use for this. We'll start with things I wouldn't recommend you buy, but that you might find lying around, which is still better than nothing, maybe. The thing on the left is what you usually find in a hardware store, sold as an energy monitor or something like that, relatively cheap. These devices aren't machine-readable, which means they give you an update rate closer to a minute than a second, which is way, way too slow. But if you manage to stretch your measurement long enough, that might still be good enough to verify that power management
settings actually have an effect. The other type of device is server management power strips, remotely switchable sockets. Those sometimes also have power sensors, and they're network-connected, so machine-readable, updating once a second. That's already much better; it's just that the price point is usually unreasonable for what you get out of these devices, though if you already have one, it helps.

A notch more attractive is something like this: a very cheap Wi-Fi power plug sold in a smart-home context. These also have a power sensor built in, and there is a free software firmware called Tasmota that works on these devices, so you can get rid of all the cloud nonsense and get high-resolution access to the sensors. It's not really built for our purpose, but we managed to get about five samples a second out of it, with some MQTT polling hacks. If somebody is interested in C microcontroller programming, I'm pretty sure the hardware is capable of much more, and we could get much better results if we didn't have to abuse the MQTT protocol for this; but we got a steady stream of sensor values with proper timestamps. Cheap, hackable, and very compact; that's probably as good as it gets.

Slightly more expensive, but actually built for the measurement purpose we want, is the PowerSpy 2, something the Mozilla people pointed us to. One downside is that it comes with ancient Windows software, and we have only reverse-engineered half of the protocol so far, so we can use it for live measurements but don't get all of its other features yet. Significantly more expensive than the plug. Also interesting, and something we got to work with during one of the sprints, is a development board for the actual sensor chips. The sensor chips themselves are super cheap, a few dollars at single-piece prices, so if you're into electrical engineering and building boards around them, this is great. And this one had a 1 kHz sampling rate over USB, much closer to what we actually need and want.
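As a sketch of the Tasmota polling mentioned above: a flashed plug publishes its telemetry as JSON over MQTT, and the power reading can be pulled out of the payload. The payload shape below follows Tasmota's ENERGY telemetry, but the exact topics and fields depend on the device and firmware build, so treat them as assumptions; a real setup would receive these messages through an MQTT client such as paho-mqtt rather than from a string literal.

```python
# Sketch: extracting the instantaneous power reading from a Tasmota
# tele/<device>/SENSOR message. Field names are an assumption based on
# Tasmota's ENERGY telemetry and may differ on your device.
import json

def power_from_sensor_payload(payload: str) -> float:
    """Return the current power draw in watts from a SENSOR JSON message."""
    data = json.loads(payload)
    return float(data["ENERGY"]["Power"])

sample = '{"Time":"2023-07-16T12:00:00","ENERGY":{"Power":12.4,"Voltage":231,"Current":0.054}}'
print(power_from_sensor_payload(sample))  # 12.4
```

Polling this at the telemetry rate and writing timestamped values to a CSV file is essentially the "MQTT polling hack" described in the talk.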
However, this is basically an open PCB running on AC power, so it's okay-ish in a lab if you know what you're doing, but certainly not something you want at home on your desk if there are kids around.

This is the kind of output we could get; I think this plot was done with one of those. It's basically a plot of the energy use over time. You can see that it's rather noisy; that's not the fault of the sensor. The reason is in the slides I had to skip: it's massive undersampling of the high frequency of a switch-mode power supply, which is why I'm so obsessed with higher sampling rates. But you still see changes in the signal: in the middle section, between 100 and 150, the signal is slightly elevated overall, and there is of course the clear peak in the middle. What happened there is that in the middle part the mouse cursor was moved, and during the peak it hovered over the taskbar and showed a tooltip. That's not meant to say that moving the mouse is bad, but it shows how sensitive these sensors are; we really see operations on that level in there, which is extremely useful if we want to verify whether our changes had any effect.

Then, for the built-in sensors: the most common one is Intel's RAPL, Running Average Power Limit. The way to access them on Linux is also via perf; with perf list you can check which of them are available on your system. If you have an Intel laptop you will most likely find a few of these: usually cores, which is the compute part of the CPU; package, which is the whole CPU carrier with the L3 caches, the I/O controllers, and whatever else is on there; and sometimes you'll also find memory and GPU as separate sensors. These can be read out with perf stat or with the pinpoint tool; I like the latter because it gives a continuous dump of the measurements, which you can then feed into gnuplot or anything else to visualize.
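The same RAPL data is also exposed through the powercap sysfs interface (for example `/sys/class/powercap/intel-rapl:0/energy_uj`) as a cumulative microjoule counter that wraps around at `max_energy_range_uj`, so average power has to be computed from counter deltas. A minimal sketch of that calculation, assuming such a counter; the paths and their availability are hardware-dependent.

```python
# Sketch: average power from two samples of a cumulative RAPL energy counter.
# The counter reports microjoules and wraps at max_range_uj, so the delta
# must account for (at most one) wraparound between the samples.

def average_watts(e1_uj: int, e2_uj: int, seconds: float, max_range_uj: int) -> float:
    """Average power between two counter samples, handling one wraparound."""
    delta_uj = e2_uj - e1_uj
    if delta_uj < 0:                # the counter wrapped between the samples
        delta_uj += max_range_uj
    return delta_uj / seconds / 1e6  # microjoules per second -> watts

# 5,000,000 uJ consumed over 1 second is 5 W:
print(average_watts(1_000_000, 6_000_000, 1.0, 262_143_328_850))  # 5.0
```

This is also why the interval between samples matters: sample too rarely and the counter may wrap more than once, silently losing energy from the total.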
Both tools, however, make it look like you're measuring the energy consumption of a specific workload. That is not exactly what's happening: you are always measuring the entire energy use of the system. You could put in sleep 1000 as the workload, do something else, and you would measure that something else, which can be used creatively; you just need to know about it.

And this, it's barely visible, is sensor output piped into gnuplot: on the lower half the energy consumption, on the upper half the CPU load. You can see that the two are somewhat correlated: if the CPU load goes up, the energy consumption goes up, as we would expect. But you can also see peaks in the middle where the energy used by the memory goes up disproportionately, and that of course is the interesting bit, because there's something behaving contrary to expectation or intuition. Again, this is mainly meant to show the level of detail you can get out of the built-in sensors.

And if you have access to neither of those, we can also measure in the cloud: the Summer of Code project from Karanjot that Joseph had spoken about in more detail earlier. The basic idea is that the CI builds the Flatpak of your software plus a Selenium test driver; we run this via GitLab on the Blue Angel certification lab setup; it's run there several times to counter noise and other interference; then it applies some statistics to deal with the spread, and it produces a report with the results, plots, and whatnot. As you can see, this takes too long to run after every commit, but it's something you would run after major changes, or before a release, to verify that you don't have regressions.
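A rough sketch of what such a pipeline does with the repeated runs, combined with the 10% certification budget mentioned in the earlier Q&A. This is illustrative only; the real pipeline applies more elaborate statistics, and the threshold semantics here are an assumption.

```python
# Sketch: aggregate repeated measurement runs and check them against the
# certification budget discussed earlier (no more than +10% over the value
# measured at certification time). Illustrative only.
from statistics import mean, stdev

def regression_ok(runs_wh: list[float], certified_wh: float, budget: float = 0.10) -> bool:
    """True if the mean of the new runs stays within the allowed budget."""
    m = mean(runs_wh)
    print(f"mean={m:.2f} Wh, spread={stdev(runs_wh):.2f} Wh")
    return m <= certified_wh * (1.0 + budget)

# Certified at 20 Wh per scenario run; new measurements average 21 Wh -> ok:
print(regression_ok([20.5, 21.0, 21.5], certified_wh=20.0))  # True
```

Running the scenario several times and comparing means rather than single runs is what makes the result robust against the measurement noise described above.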
Right, so how do we put all of this together? The first step is just general profiling and optimization; you'd be surprised by what you find in there. It makes the software faster for the user, and usually it also reduces the energy use at the same time. If you're looking specifically at energy use, the biggest impact is of course on long-running processes, and there are two interesting things there. There are continuous or frequent workloads: for example, media playback probably has a higher impact than, say, opening the configuration dialog, just because one typically runs for hours and the other you do a few times a month. And the idle behavior is the other thing to look at, system-wide, because every single wakeup counts; whatever we can get rid of there helps overall.

The other thing you probably noticed from all these repurposed random devices and weird command-line hacks: the tooling is extremely basic. This is nowhere near what we're used to for profiling other things. You basically end up doing manual polling of some device, piping it into a CSV file, opening that in LibreOffice, and trying to put things together. There's a huge amount that could be done better, and it could also become more accessible for us as users by not having to figure out each time how to put this together again. As Joseph mentioned, we have a BoF on Monday at 10 o'clock on anything related to the sustainability goal, I would say, and I have some of the measurement devices with me, so we can have a look at those, or you can try them if you want to. That's it; we have about five minutes for questions.

Questions, answers? This is more of a comment than a question; no, it's a question: in PowerTOP, do you happen to know what that tick_timer entry that's always at the top actually means? I think that's some kernel thing, but I don't know more detail about it.

There are things that you take kind of for granted when developing; say, moving things from the CPU to the GPU is always considered a good thing. Is it still
true for power; is that good or bad? That's a very good question. I think it's on a slide, but I didn't talk about it: solving the same problem with a different kind of hardware is exactly the point where you need to measure, because there is no documented best practice. It very much depends on the hardware, and it's often contrary to intuition. There is some documented research from the HPC people, because for them this works at a completely different scale, and I think there the GPU usually outperforms the CPU; but for our use cases it's always: measure. It's simply not known which is better for very specific workloads.

Okay, I have a question about real-time measurements. On Android, for example, you can see the power consumption of different applications. Do you know what they do, and could we use something like that on the desktop? That is a good question; I don't know exactly how they do it. This whole topic of attributing energy to processes: I know there is some research, and we've been in some meetings with people looking into it, but a lot of that is just taking the proportional use of the CPU and then more or less guessing. To my knowledge there is no way of measuring this with any kind of certainty; it just correlates, on average, with the amount of time spent on the CPU, and I think that's what Android does. It's probably close enough, just because of the big differences involved, and it doesn't really matter if it's a few percent off; but proper, strict attribution, I don't think the means exist for that.

Hi Volker, two questions from our remote participants. One is: the cheap wall-outlet meters, do they have certifications, are they safe enough to not burn down your house? I mean, those are sold as consumer devices with all the necessary certifications, so they are generally safe, I would say, and we don't modify them. That is also the reason why the more professional device has a Bluetooth interface: it makes the safety
easier: if you don't have a low-voltage signal cable coming out of something that runs on AC power, then you solve a whole lot of safety problems. So yes, all of those have the CE certificates or whatever you need in your country. I don't want to guarantee that this is 100 percent safe, but this is off-the-shelf standard hardware made for home use, so I would say yes, as safe as it gets.

Thanks, and the second one: do we have a general guide within KDE on how to do this kind of profiling?

That is a good question. I don't think we have a single go-to place like a profiling tutorial, but I think there is a lot of material out there, and there are a lot of discussions, talks, and other tutorials on that general subject. It might help if we collect that and point people there; at least that part is missing, yes.

And a last question, I think, from Heinz.

Hi, so there was a lot of talk about measuring usage on the CPU or usage on the GPU from the built-in sensors. If you have an Intel CPU with the GPU built in, that will also show up as a sensor; I think one of the lines in here, the blue one, is actually the GPU. I'm not sure if you get the same for dedicated graphics cards.

That depends on whether they give you access to the sensors they have. I'm not sure if we have that, but for the built-in embedded Intel GPUs we get that, yes.

Alright, thank you Volker, thanks to the people asking questions, thanks to the online participants. For the real-life participants here, it is lunchtime: get out into the hall, you can talk to the Qt stand, they're hiring, and you can talk to the sandwich people, they're feeding.

Next up, the audience chooses: the e.V. board report. The audience chooses nothing, you are required to be here. It's great, there are a million reasons to choose the board: you can vote for us. So, welcome back from lunch. This is going to be the report of the board, and I'm going to give this mic away so that I can take a seat there. Maybe you want to keep it? And then we have four microphones, let's
share as if we were friends. Feelings? Nothing more than... no, not today. So let's do this.

Yeah, well, thanks for joining. Now we're going to do the KDE e.V. board report. This is something that we used to do within the AGM, but then we realized it was a good idea that everybody who is not part of the KDE e.V. could also join and listen, so that's why we're doing it here on the main program of Akademy. For those of you who are e.V. members, there will also be a version of this in the AGM with, I don't know, all of the information.

Before we get started, let's do a super short introduction of ourselves. We can do it with the first slide if you want, so you can see our names. I'm going to start myself: my name is Aleix, I am from Barcelona, and I am the KDE e.V. president nowadays.

Hi, I'm Adriaan, I'm a roving board member, I guess that's all. Lydia?

Hi, I'm Nate Graham, I'm another roving board member. I'm from the USA, I was just elected last year, and I'm happy to see all of you.

I've been on the board for quite some time, currently vice president. We would also have Eike here, but for personal reasons he unfortunately can't be here, so we will do this with the four of us. Hi Eike, if you're watching us: he knows everything we're going to say anyway.

So in this AGM, three of our roles are up: this year it's going to be Lydia's, Eike's, and mine. The three of us have announced that we're running again, but if any of you wants to join and fight us in a voting battle of democracy, you can do that. It will be very interesting and fun for all of us, I'm sure. Cages will be provided if you want it to be a cage match. Bludgeoning weapons only.

Yeah, more on the practical matters of the e.V.: this year we have six new members. That's one new member less than last year, but a good bunch more than a couple of years ago. Well, if you know somebody who is doing good KDE work and think they would be a good fit, remember to invite them, it's always a good idea. But for now let's thank Raju, Neil, Gumpa, Justin,
Felix, Natalie, and Simon: thanks for joining the KDE e.V. this year. Oh yeah, something I wanted to mention also: a big thanks to Adriaan, who has handled a big topic in the past, and we hope that with the new work now it's going to be amazing, thank you. The button got pressed. Spoilers! No, the button got stuck. Well, it's the next button, it has like two buttons this time. Okay, yeah, that's good.

So one of the big things we have, besides e.V. members, is individual supporting members, who, as part of what they do, donate. There have been big changes in how donations happen: the individual supporting members are still on the previous system, and they're being migrated to new fun ways of doing their stuff, but over the recent years, and more recently even, we also started using Donorbox, and that's also how people are starting to come into this individual supporting process.

Yeah, just the one, yes. So those were the individual ones, individual people who do it for their own sake; these are organizations that participate, obviously in a kind of different amount. We've had a good number of new ones over the last year: thank you very much to the three of them, Kubuntu Focus, g10 Code, and Ambition, but let's not forget the existing ones, Blue Systems, Google, SUSE, The Qt Company, Canonical, Slimbook, and TUXEDO. All of them are KDE Patrons, organizations that are part of the KDE e.V., and they take part in other spaces, like the advisory board for most of them, which we will get to. Also we got a new Supporter: one of them moved from Patron to Supporter.

The advisory board: so the advisory board is a way for organizations to have a say with KDE, not a say in KDE. They're not steering, but it's good to have that conversation in somewhat private, to help those organizations understand where we are going, and those organizations can say something about where they would like us to go, what their interests are. The advisory board consists of both sponsors, patrons such as
Blue Systems and Canonical, organizations that make heavy use of KDE software like the City of Munich, and other interested organizations. So these are people that we talk with with some regularity, in the advisory board calls, where we can hear each other out on what is interesting. So we'd like to thank Blue Systems, Canonical, the City of Munich, Debian, the FSF, the Free Software Foundation Europe, Open UK, SUSE, TUXEDO, and our new advisory board members Kubuntu Focus, g10 Code, and Ambition. So that's the advisory board.

Besides the advisory board being the place where we can talk to our supporting organizations, we also have other friends, partners, and affiliations. This is ongoing collaboration, sometimes because it's technical, sometimes because we've shared an office, or we just feel like doing cool stuff together. So LyX and the Qt Project and the Randa Meetings are our friends that do cool stuff. KDE España is the local organization in Catalonia and Spain, and they have Akademy-es, a similar kind of event to what we're having here. We're partners with GNOME, of course, to organize the Linux App Summit, a once-a-year get-together to talk about getting applications out there. We are part of the application ecosystem working group, and we are affiliated with the Free Software Foundation Europe and the Open Invention Network, which are free-software-promoting organizations, and Open UK; they do things like patent protection, licensing, and innovation. KDE as an organization is also a member of a number of things: the OSI, the OIN, OASIS, and The Document Foundation. That means we have our little say in how standards are built, we have a little say in patent protection for the Linux ecosystem, and that's all very useful. So that's the way we're involved in the larger ecosystem around us.

Let's switch to a smaller view. KDE is no longer a purely volunteer organization; we're really happy to have a bunch of contractors
and employees that help us achieve the goals of the e.V., and the goals of the e.V. are to support you, all of you, in getting cool KDE stuff done. So first and foremost we should mention Petra, who does all the things. We will go through these in more detail briefly, but I'd also like to mention Joseph, Aniqa, Paul, Adam, Dina, Tiago, Ingo, Natalie, and Nicolas, and I will tell you all about them on the next slides.

So, Petra, I'm sure many of you have come across: she's incredible support for the board, maintaining our office, making sure that you get your reimbursements, that our paperwork is in order, all these things really important for the e.V. to run. Similarly Dina: you might have come across her amazing help in making sure this Akademy runs, together with the local team and the rest of the Akademy team, as well as the Linux App Summit; we couldn't do this without her support. Then we have Adam, who is supporting everything around the goals, making sure that the goal champions have what they need, that they can meet every now and then and discuss, and the same thing with the Akademy team, supporting meetings and so on. If you at some point need project coordination support, let me know and we can figure it out.

Which brings us, I'm sorry, which brings us to Lana and Joseph, who joined us for the KDE Eco program. They have done amazing work in making sure that KDE is taking part in the project, that all the paperwork gets done, that all the community outreach gets done, that we get the eco certification, and so on. The funding for that has by now ended and we are looking for new funding. Lana unfortunately left us due to the end of the project, but we hope to work with her again, and I think that Joseph has been focusing more on community topics, which he will talk about in a talk later today or tomorrow, I'm not sure.

You should turn it... I think we skipped one. Yes, we did, yes. Next up on our list of contractors we have Paul and Aniqa, our marketing team. Paul and Aniqa have been responsible for helping to
improve KDE's presence around the world and make sure everybody knows about us. This year has been quite a busy year: they have been doing a lot of work around the fundraisers and the communications regarding those. The KDE network is a topic that has seen a lot of work as well, to make sure that people are organized around the world. They have also helped a lot creating content for external events and helping to promote those. They have been updating KDE's presence; they have made the jump over to free alternatives, so people know about us and are able to hear KDE news there. And they have also helped to create the "for kids" and "for activists" and all sorts of other such pages that you can find on kde.org; these are really cool pages. And finally they also help out with release notes and release announcements for our big releases and KDE Gear. So, a lot of work.

And next up we have our newest contractor, Natalie, who is our hardware integrator. She started just very recently. She has been working to improve the user experience of Plasma on devices like the types that are shipped by many of our partners; we have a lot of partners and supporters these days who actually sell hardware, which is a really exciting topic close to my heart, it's something that I like to see. And so Natalie is helping to make sure that this hardware works better with our software. She started with PowerDevil and general power management topics, which are always very important, especially for devices with batteries. This also is a part of the KDE Eco initiative, so there is some cross-pollination going on there, and we are going to be seeing a lot more from Natalie soon.

Ingo: we did announce last year the first of the Make a Living positions we created, including Natalie, and then Nico, who we are going to talk about later, and, well, Tiago, who I think we also put in the back at some point. Ingo has been working on making it easier for all of you to ship your applications on different platforms. There's been good progress on the Windows
Store and Android so far, and we're starting to see how to finalize that. We're going to have a BoF, right Ingo, over the week? Actually yeah, right, because it's on the slides. Well, be there if you have an application that you want to see shipped, and reach out to Ingo if there's something that you think could be done at a KDE level to help you do that. It's very important for us that your applications get out there and hopefully help fundraise somehow and become part of this healthy cycle of having your users make KDE bigger, right?

And Nicolas: he started this year around February. The idea for this position is to work on the different aspects of our software that are shared, and to help all of you to work on and create your KDE software. At the moment this has mostly meant helping with the port of everything to the Sixes: KDE Frameworks 6 and Qt 6, Plasma 6, and everything in the 6 now. So that's what we've been doing so far. Yeah, six six six, that's what we've been doing so far; we will see what the future is going to bring us.

And the last of our Make a Living contractors is Tiago. Tiago picked up the long-running documentation improvement project: we spent some time, a long time ago, examining what our documentation needs to be really world class. Tiago has been picking up individual topics, each for the last month or two, to improve. So he's improved the Kirigami documentation, has been annoyed at KConfig, haven't we all, and he's available for writing and general documentation review. Feel free to reach out to him if you have documentation needs.

That's it for the contractors, and we will move on to events, because the KDE e.V. supports, well, pays for contractors to improve the KDE software, but also supports events, because events are great: it means you can actually look people in the eye and make funny gestures at them, yell at them in person to fix your bugs. Yeah, yeah. See, where else would you get Kevin to
stick out his tongue at you? So we're really glad to have all of you here at Akademy. Akademy is our yearly, our biggest, our farthest-flung event. A big thank you to the sponsors who have been here. One of the topics each year at Akademy is where the next one is going to be, so if you feel the urge to organize it, then do so, anywhere in the world; given the temperatures here right now I think we want the next one to be in Norway in November, but that's up to you, feel free to organize the next one.

We don't just do the big events, we also do sprints. Sprints are far more focused, small events that pick one specific software topic or organizational topic. You see that we have e.V. board sprints; that means that we get together, either in real life or online, and spend a weekend working on administration. There's been a KDE Plasma sprint; there's PIM, which traditionally tries out the cake in Toulouse; there's been the KDE Eco sprint. We've had a handful of sprints, but not nearly as many as we'd really like, and so I'd like to repeat the call to everyone in the KDE community: sprints are for you, and we're here to help you make those happen. So if you've got a topic, reach out to us, reach out to Adam, and we'll make it happen.

Next to our own conferences and our sprints we also, of course, show up in lots of other places. So for example, so far this year we were at FOSDEM; KDE Eco was present at LAS and at the Bundestag; we went to something very new, the All-American High School Film Festival, to get a bit out of our bubble and to talk to other people about cool stuff like Kdenlive; Latinoware; Qt World Summit; QtCon Brasil; MegaCon, another one of those things to get us out of the bubble. If you want to represent KDE at events, you should totally talk to the promo team or the board to get help for that if you need it. On the side of conferences of our own, the last year we had the last Akademy, the Linux App Summit this year, Akademy-es, and of course this Akademy where you are.

And to make all this
happen we need money, which fundraising gives us. There are quite a few things happening around fundraising. As we already talked about, we've got new patrons with Kubuntu Focus, g10 Code, and Ambition. Together with the fundraising working group we refocused our donations on the new platform, Donorbox. We ran an end-of-the-year campaign and we ran a fundraiser for Kdenlive, both of which were very successful. We have started thinking about how to raise partnership prices starting in 2025, because membership, or partnership, in KDE e.V. is comparatively cheap; you can still get in before 2025 at the cheap prices, tell your boss. Grants have been successful in that we got the KDE Eco grant; so far we've unfortunately not been successful in getting a follow-up grant, so if anyone can help with that, please come talk to us and Joseph. And last but not least, we're thinking more about how to actually get paid apps into the proprietary app stores as a sort of revenue for KDE e.V.

And of course, as the saying goes, do good things and talk about them, so we also have to tell the world about what we do as the e.V., and we published a report for 2022 just the other day. It's a long and beautiful document with a lot of behind-the-scenes looks at what's happening in KDE e.V. and KDE.

So, I already mentioned the fundraiser for Kdenlive. At the last Akademy we talked about how we want to work with the Kdenlive team, as part of this whole Make a Living discussion, to try to find dedicated funds for them to improve Kdenlive, as a kind of test balloon to see if that is something that works for KDE e.V. and for a team that wants to do dedicated fundraising. The fundraising side of that has proven fairly successful and we're quite happy with that, and we're quite happy with the effort that both the Kdenlive team and the promo team, but also the fundraising team, have put into this to make it a success, because it was quite a bit of work. Now we have the money, and now we need to spend it on
improvements to Kdenlive, which is one of the next things that will come up, and I'm sure you will hear more about it tomorrow in their keynote. So overall, this trial of dedicated individual project funding continues, and things are looking pretty optimistic right now.

So now is the time when we talk about some of our favorite things: the highlights of the year, everything that you all accomplished with our help, some really great things. We talked a little bit earlier about sprints; we were very happy to see that sprints started happening again after the isolation of the COVID pandemic, which was no fun. We definitely want to see more sprints; this is something of our bread and butter, so let's see if we can do some more of those. We also concluded the Blauer Engel project: the funding has run out, but it has essentially been successful. We got an app certified, and we've created a lot of interest and momentum behind it; I think you saw a lot of engagement about KDE e.V. and the goal topic before. It now continues as a community goal, so that's really great. We also have our fundraising platform; Lydia just spoke on that a little bit, but moving to Donorbox has been really hugely impactful for us, and getting a modernized fundraising platform gives us many good options going forward for the future. We also have the Qt 6 porting, which is going on; this is something that's been happening in the background for a while, and starting a few months ago it's happening in the foreground, so we're really very happy for all the work that people have done here. I want to call out Nico in particular, who has put really a lot of effort into this, and the net result is that many people in the audience here, myself included, are able to actually live on Plasma 6 git master right now. Okay, I'll shut up now. It's pretty great. And we're also in general seeing more interest in our software among our hardware and software partners.

And one of the things that I missed, something that was hidden behind the microphone: we
filled all the Make a Living positions. This was a multi-year effort to start hiring more people, and we have now done it. So we did a lot of the things we said we were going to do, and it's pretty great. Next slide.

So, what we said we would do. Number one was to reassess our financial situation: we definitely wanted to move towards more sustainable funding models, and we also wanted to make sure that we were able to keep out of legal hot water; we have now succeeded in doing that. We have also modernized our donations platform, so on the fundraising side in general, what we set out to do has been a success. We trialed project fundraising, as Lydia mentioned a little bit ago; this has also been a success, and we're going to be using this experience going forward to do some more things, which is very exciting. We also wanted to make sure that our infrastructure was able to move: the GitLab migration has been proceeding, everybody is using it now, and in the near future we're going to finally move 100% onto it, turning off Phabricator, so that is going to happen as well. The next thing was to support the new KDE goals; this is something that's been happening over the year. Speaking personally as a goal champion, I feel very well supported; I can't speak for everybody else, but that's at least one out of three. There's going to be an event around the goals next year, possibly in the spring or summer; keep an eye out if that's something you'd be interested in attending, there's going to be more information about that soon. In general we would like to see a little bit more community involvement in the goals: these are definitely community goals, and as nice as it is to have goal champions with lots and lots of time to work on them, sometimes that isn't always possible, and so it's great when people take the initiative too. So that's something that we would like to see in the coming year. We also want to learn a little bit more about our changing ecosystem with regards to how software is distributed and what our position is in the greater free software
world. This is something you're probably going to hear about in many other forms over the coming conference, and we're going to continue to keep an eye on it. And finally, a big topic this year was to get us all back together in person. Now we can pretty much all say that the global pandemic is finally over, and it is very nice that we're all able to sit here in a conference and enjoy each other's presence, so that's pretty great, and I'm happy that we've been successful in that.

So, topics for next year. You will see that there's stuff that continues over the years since last year; it's not because we think that it wasn't successful, but a lot of topics span a good amount of time. Like Nate was saying, the KDE goals are working very well, but we do miss people joining them. It would be interesting to understand what would be the correct dynamic for that to happen, because, like Nate, I was a KDE goal champion for a while, and it could feel lonely at times. It's a very social issue, but I think that from the KDE e.V. we can help that happen, and finding the right way to help the goals is going to be a good source of success for the goals, which are in the end a success for KDE itself. The financial situation that we keep talking about: we had big injections in the past that we had to learn to manage; we've done the Make a Living program, which has increased our spending considerably, funded through donations, through everything, and that's what we're doing. And, well, there's no pandemic anymore, but we do believe that meeting is part of the goals for the KDE e.V.: turning KDE from something that is virtual and theoretical into something practical, where people get to work together, is a big part of what the KDE e.V. does, and we need to make sure that our goals, as Adriaan mentioned before, are about the conferences, but also about the sprints and the different meetings that we can have, whichever are the right ways for us to coordinate among ourselves. In any
case, over the last few years we've been talking about the Make a Living positions and how we do this; we're moving into a mindset of: okay, we're now spending that money, we need to make sure that we're delivering there. Not to put pressure on all of you working in those positions, but actually it's on all of us, right: we need to make sure that these positions deliver, not just those being contracted, but all of us, whoever benefits from these positions. Because all of them, I think we all agree, bring important things to the community, but nobody can change the world on their own; we need to work together. So for us, making sure that the Make a Living positions are successful is very important, because it is, we believe, what is going to shape the story of the next few years of the KDE e.V. in general.

One trend that we've also been seeing, and it's not entirely unrelated to what I was just talking about, is getting our products closer to our users. We've been a little bit distant from our end users, in this theoretical, virtual kind of product that maybe we've been, and while that works and it's super fun, well, it makes us a little bit less relevant. Finding ways to touch on what everybody cares about, and being able to act on that, is important. Now, you could think this is something more related to what the KDE community and the different developers do, but our experience, or my experience, is that as the KDE e.V. we have a lot of possibilities there, and if there are ways we can do that, we will continue doing so, like we are already doing now; I think that you all agree.

So these have been the key topics that we see for the KDE e.V. for this year. Obviously everything is discussable, be it at the AGM, be it in the hallways, in the chat rooms online, or, we're going to have an office hour during the BoF days where we can talk about whatever you want to talk about; we don't really have a list of topics, so feel free to join. Like I said earlier, also, if you're
thinking about either joining the KDE e.V. or joining the KDE e.V. board, you can reach out to us and we can talk about it; you're welcome to do either of those things, it's all fine and fun, and I would say it's a good thing to do. So that's what we wanted to talk about. I think we have some time for questions, if any of you have them.

I just have two questions. So, I was thinking: do you think that the people in the Make a Living positions at KDE could also provide some kind of reporting on their activities for the last year? A lot of us actually don't get to see much of that work, and I'm kind of disconnected, and I would love to know. And then the other one is, I don't know if you would consider finding help there too, but when it comes to design integration, I feel like sometimes some of us designers just have a big bridge to cross when it comes to realizing some of the work we want to do, and unless we go and rally some developers that might like what we're doing, we have, most of the time, no chance of making a certain change happen in the system. So I just wanted to suggest that, and ask the question, about finding some kind of UI integrator or something like that.

Maybe it's easier if you do the questions one by one. On the reporting: something we've always told our contractors, maybe it's more relevant for the Make a Living positions, but in general it's for all of them, is that communicating is important and even part of the job. For the Make a Living people, you will see them blogging or sending emails on the different mailing lists; those are the kinds of tools that we have available. We can always do more of that, though note that we also need to pay for it; but I think that we're not doing terribly badly that way, and we'll be seeing more of it. There's two of them that are not even one year in; actually, none of the Make a Living people are one year into their position, so we also need some kind of flexibility there. More communication would always be better, but yeah, it's an evolving
topic. This is something that we're pretty new to at KDE e.V., so anything that you have to share regarding what's working and what's not working is very helpful. The message that more communication would be desirable is a good message; I think we don't want to make HR policy here at this meeting, but it's definitely a helpful thing for you to tell us, and I think we can be thinking about that going forward, so thank you. Looks like we're getting an answer out of the audience too.

Ooh, you turned it on. No? So, to sort of answer the first question on my behalf: what I've been doing, or trying to do, is a monthly blog post of the things I've done this month as part of being the KDE software platform guy. I have been slacking off a bit for the last two months, which is not great, and I should get back to that habit, but that's at least my ideal of working and communicating. And I would also like to sort of take the reverse and turn it into a question: how could we, or how should we, establish some sort of communication channel from the community to us contractors, about what the community would like to see us doing?

So, I think, and that's what we've been suggesting to all of the contractors in the past: you get sponsored, but you get sponsored to work within the community, and in the community we have plenty of channels for us to communicate, and just using those in the most efficient manner should be enough; if that's not enough, we have bigger problems, right? You're here: everybody, go talk to Nico and tell him what you want from him, that's what he's saying right now. In general, extrapolating a lot about, for example, how each of the contractors communicates is delicate, right, because the different positions will also be very specific about what they do. For example, in Nico's case there are smaller tasks that you can be talking about, but no super flashy things; in Ingo's case, he has been seeing bigger things, and he has been blogging about the features
when they get more done, and I don't think that one or the other is a better approach; I think that on a case-by-case basis one or the other will make more sense. When we start seeing how Natalie does that, I'm sure she'll find the correct way; deciding that they all should be doing the same thing is also the wrong approach.

On the second question, from Andy: can we take that one? Go for it. Okay, so, if I can paraphrase your second question: can KDE hire a designer, or something like that, is that right? Purely from an artistic perspective, right? But I mean, somebody who can write code and integrate. Yeah, somebody they can order around. So it's an interesting idea. One thing I'll say from an HR perspective is that we don't currently have any positions open right now; this is by design. We wanted to fill all these positions and then reassess our financial situation. What we really, really don't want to do is hire a bunch of people, they do a bunch of great work, we run out of money, and then we need to fire a bunch of people; that would be terribly undesirable. Right now we're in a situation where we're deliberately spending more money than we take in, so that we can reduce our reserves, but those reserves don't last forever, and we're not trying to spend our reserves down to zero. So before we can open any new positions, we need to increase fundraising efforts so that our burn rate comes down from where it is right now. Again, I would like to emphasize: this is intentional and deliberate, the organization is not going bankrupt; we were deliberately trying to spend down our reserves by hiring for these positions. Now, having done so, we need to make sure that there's a sustainable funding source, so that we can keep the people we have hired, so we don't have to let them go, and can also expand in the future. I would say, if you have any interest in future positions, please help out on the fundraising side; this is
super important, and we now have really great tools that make fundraising easy. So at the moment, I think the design topic, and the interface between design and code, is probably going to have to remain a community topic, a VDG topic.

At the moment, you know, you mentioned this: one of the topics when we were talking about Kdenlive was how we're going to manage the different, like, the separate fundraising. This was something that we never really wanted to do, because it's work, but over the last year some tooling for our treasury work has been put in place, so at the moment we have the opportunity to do better work there. So maybe, if you're creative, we can find what we can do together.

We're getting one last comment. It is a comment: sorry, to continue from what Nicolas said, the promo team writes a log every two weeks; that is not merely for Nate, and we can make it public so everybody can see what we have done during those two weeks. We can make it publicly visible, because it goes out on social media and stuff like that anyway, but if people want to see it, we can make it public too, so that others can see exactly what we have done every two weeks.

And that wraps it up, thank you for coming. Now, working group reports: we will do the working group reports in four minutes, and if you have more questions that we couldn't answer now, you know where to find us, and we're not hiding either, just grab us.

Hi everyone. So, yeah, very quickly, for those that do not know of us: the community working group takes care of the community, actually. In the end, what we're asked to do is to enable discussions to flow in a healthy way within the community, to maintain good relationships between our people, and, whenever we're asked to, we step in to resolve conflicts or situations. We are three members that are currently active, and we're all here today and during the Akademy: it's David, our other member over there, and myself. I joined last year, so I've been around for one
year. We had two members step down since last year, Bavisha and Valerie, who didn't have the time to help more. So, very quickly, to go over our work over the past year: what we tried to do, and with the help of sysadmin achieved, is to create a private space, a private team on invent.kde.org. As you understand, a lot of the time the work of the CWG is a bit on the edge of whether it should be public or not; sometimes we have to handle things more discreetly, so it's good to have a private space where we can talk among ourselves, and at times also with the board, in order to coordinate how we act. And of course it's a way for us now to document all the work we've been doing: previous complaints, how we handled those in the past, so that as new members come in, they can go back and see. We think that's an important next step for us. In terms of the e.V., we realized that the website, the web page, is kind of outdated; it's been around for a long time now. The working group has been around for, I think, 15 years now, and a lot of the information there is not up to date, so we already started working on updating it, but we need to continue with that and make sure it represents where we currently stand and where we want to go. In terms of actual requests that came in since our last AGM, we had a total of 4 requests, about 3 separate matters, and actually 3 of those requests were about one specific individual. As you see, one was a misunderstanding; for the other two we had to step up and try to take action. Just so you know what kinds of requests we usually get, here's a sample from the last year at least; it's a small one, but I think it's indicative enough. One of them was about process: we had a person doing some things not in the standardized way, merging some code, and then we had complaints about the process that was happening. The other was about improper behaviour, people reacting to things in, let's say, not good ways. Now, in
terms of the lessons we learned: you can understand that many times we have to step in and discuss things that don't have clear right-or-wrong, yes-or-no answers. So the important thing is for us, three different individuals, to discuss and establish an action plan when we have a complaint, and to set the expectations of what we expect from the person in question and all the people involved. That's the hard work we need to put in: discussing a specific case, taking in as much information as possible from the people involved so that we have a good understanding of what is happening, and then reaching, or trying to reach, a consensus on how we act, because we want to give clear guidelines on what the next step is to the people involved, and not leave them confused or without clear answers, which usually doesn't help. But when you're talking about things in the open, when you're doing this kind of work, it can easily go either way. One lesson we learned is that we should probably try to respond faster to people making a request, first right when they file the complaint: if you made a complaint, it's good for you to know that we are on it, so that's the first step. And then, once we have taken action, it's good to go back and follow up with that person to say: hey, this is what we did, and if it's still a problem we can definitely improve. Another thing we learned, I already mentioned it earlier, is about documenting our discussions. This is something we are doing now, and it helps a lot to go back and have a history of what the complaints were that came in, what the discussion was, how it evolved, and what the result was, so we can use that to be faster in responding to future requests. And another thing we saw that will probably be useful: sometimes you have these repeating, let's say, problems, so it would be good for us, from time to time, when we are talking
to people, to have a place to point them at, the way you have your contacts. It would be good to have some type of wiki or page where we can point people to, to see what's been asked of them and how they are expected to act in specific situations. Of course, as it's just the three of us, we could use all the help we can get in order to achieve what I already mentioned: trying to be faster when looking into new requests. Sometimes these things take time: you need to take a step back, discuss, talk to people, gather data and information, make sure everybody understands the position, then discuss it internally, then go out and give some guidelines on what should happen; maybe sometimes you need to coordinate with the board. So it takes time, and if we have more people, maybe we can push that forward faster. And then we want to try, if we have the help of course, to expand our role a bit more and be more proactive, because usually the way we act is when we're called upon, when there's a complaint, when there's a request; we're more reactive. With more people and more resources we can be more proactive. To quickly conclude, our key goals for the coming year: we want to start having regular meetings, which we haven't had so far. We can use these as working sessions as well, to start talking about how we want to evolve. If no request comes up for a long stretch of time, that's a good thing, I guess, for our community, but at the same time it's good for us to stay connected and have a common place where we can work and progress. As I mentioned already, there's the website and the wiki that we want to start working on. And then a bit more about our own identity as a community working group: from my time in the board as well, I know that sometimes the community has a specific perspective on how community working groups should act, and there are things
I think we can improve. On top of that, we now have Joseph, who is working more on community topics; we think we can work together with him much more. We are already trying to be at places like the forum, so we can act as moderators as well and have direct access to those kinds of discussions, and in general try to do more community building work and less, let's say, policing, and support people when there are conflicts. As I mentioned, all three of us are here and you can talk to us, but if you prefer to do it online, you can email us at this address. Thank you. Thank you so much, and with that we are moving on to the money part of it, with the financial working group. There you go. Thank you. The job is to help the treasurer with money-related issues, and the whole board in general with money-related things, and to help the community come up with a budget and stick to that budget. The people on the financial working group are Ica, Mata and Sil; I am none of those people, I'm filling in because unfortunately they couldn't make it, so these are their messages. Overview of this year: we've finally outspent our income. We've had a lot of income, we've got a lot of reserves, the board was talking about it a lot, and we deliberately outspent our income. It's important to stress that this is partly because the pandemic is over, COVID doesn't exist anymore, and we've made an effort to hire more people. But that doesn't mean our income was bad: in 2022 it actually increased over all our previous years, and it's increasing by more than just inflation. But, as mentioned, we deliberately increased expenses, mostly personnel, and the planning was spot on: we were within 1.5% of the planned budget. So, graphs: the general trend is upwards. 2018 was an exception where magic stuff happened and we got loads of money, but we're still improving nonetheless, and expenses are going up because we've got money to spend. So this year we outspent our income, which is what we wanted to do. Graphs for 2023: the budget plan we made is staying on
course; we're still trying to burn through some of those reserves responsibly, not just spending it all on biscuits for your board. But because we're outspending our income, we do need to make sure fundraising (coming up next) stays on top of things, to keep up with the new expenses that we want to keep paying forever and ever. So how is 2023 looking? On track: we've got three new patrons, yay, and we're pleased with the Akademy partnerships, and everything's going well. There's also a new tool, written by Icahine, which is going to allow the financial working group to get feedback on what's happening with the money continuously, rather than doing long manual work once a year; that hopefully means more time for more productive tasks. Here are some screenshots of it, with graphs, and it's going to be amazing. And we're continuing with the money, with the fundraising working group. Okay, so after the finances, the fundraising: that's where we get the money. What we are doing is trying to identify fundraising opportunities, and unfortunately also the execution of the fundraising campaigns: the organization, coordinating with the promo team on the press materials, like for the end-of-year campaign. And we are working on the infrastructure for the donations; we'll come back to that later. So it's me, Lace, Nate, and Olivia, who is helping. This year we actually did the Kdenlive fundraising campaign, which was the first Donorbox-based campaign we did, and it was a huge success. Here are the numbers: a lot of people donated, and we got some monthly and annual recurring donations, which is even better, as well as one-time donations. We also did an end-of-year campaign, which was also a huge success: the goal was 20,000 and we got more, which is always good. There are also a lot of stats; I think the documents are online, so if you want someone to read out the stats, we can do that later. For the near future, the plan is finally moving away from CiviCRM.
We are not really happy with CiviCRM; we've talked about moving away from it for years, and now we are doing it. All the new donation campaigns are already based on Donorbox, and we are hoping to shut down the CiviCRM instance soon. And big thanks to the promo team, who were really helpful with the fundraising campaigns. Now we are moving on to the non-money parts of it, to the legal pieces. So this is the report of the KDE Free Qt Working Group, and also a bit of the KDE Free Qt Foundation. Our members in the foundation for KDE are Albert and Olaf; Albert has a talk in the other room. The other members of the working group are me, Chris, Iker, Frederik, Martin, and the newest member, Victoria, since last year's Akademy actually. So, what we did: we usually meet monthly and discuss any topics that come up in the foundation, to support our members there, and we also have a chat channel on Matrix to discuss any issues that pop up needing instant action or discussion, like our opinions on matters. The foundation itself had one virtual meeting, because nobody wants to travel to Finland or Norway all the time. The Qt Company has a new member, Joar Pekaniemi, who is replacing Lars Knoll, who was also the chairman in the past, and the foundation thanks him for all the work he has done there. What we achieved last year: we managed to get the accounting submitted on time to the authorities, which was a bit stressful, because the accountants actually wanted to have Albert's signature a few days before it needed to be sent to the authorities in Norway, so it was a bit short on time. Relations with the Qt Company members are good, as we are told, also with the new member. And the foundation turned 25 years old, and Qt is still free software, so that's an achievement that we have. Where we need your help: the Qt Company asked us if we could have a meeting with KDE people to discuss things about Qt. It was not quite clear to us what they really expected from us, but if you think of something, or know someone who would be useful to
meet with the Qt Company, talk to us. And in general: be on good terms with the Qt Company and the Qt project, work with them, collaborate with them, be nice to them. Thank you. Thank you so much. That brings us to the last one, which is about all the infrastructure we run on, from the sysadmins. This report was prepared by Ben. We basically make sure that our servers continue to run and our services continue to run. Some interesting data points: we handled 7 terabytes of data in the month of June, and that completely excludes what the CDN and cloud providers already handle for us. We average 170 web requests per second, and this excludes GitLab, because GitLab is a whole other, much larger thing. These are our members, and we also have support from other community members, as well as support on very specific services, for example Sanctuary and things like that. I think this is a recurring item, because every time, essentially, a server we're running on runs out of support, we need to rebuild our servers. Also, we took over the Discuss instance, the Discourse instance, and thanks to website work done by our web team, we were also able to eliminate Capacity, the framework that was powering our old websites. Also, with OpenStreetMap and all those requirements, we were running out of space on our CDN server, so we had to rebuild and expand it. And LXR, which we use for allowing developers to search our source code, was also moved to a different server. Some of the improvements: with BigBlueButton, what previously happened was that it was on a smaller server, so if you wanted to host some sprint or some meeting, you had to create a sysadmin ticket and ask us to please increase capacity for this period, and we had to then increase it, and then
we had to decrease it again afterwards, so it was a bit of manual work we had to do. But now we've put it on an already much larger server, so we don't need to do that scale-up-and-down anymore, essentially. Also, one of the things is that we moved away from DigitalOcean to Hetzner, because DigitalOcean essentially had cloud-based CPUs, so it was not as performant, and now this is native hardware. And I mentioned the Discourse forum that we took over hosting. GitLab also continues: the migration continues to go ahead, and we have quite a few updates happening to GitLab regularly; they basically release every month, more or less. We also did some of the CI work. When we started the GitLab migration, our plan was to do it in three different steps: one was hosting the code, then having the normal CI, and then the Binary Factory. We now have a proof of concept for the Binary Factory part; there is a BoF about it, where we have to discuss it and finalize it. Also, Docker Hub basically went commercial, so we had to host the Docker registry ourselves, because if you go through Docker Hub, you can only pull Docker images a limited number of times, and that was not really scalable for our CI. Okay, so where we need help; these are our to-dos. The GitLab signing service: as I mentioned, there is a BoF about it. We also have to migrate our tasks, because that is the only thing keeping Phabricator alive: we have to create an archive, and we have to migrate our tasks to GitLab. And we have to rebuild our mail system, which includes the migration to Mailman 3 and the retirement of MyKDE and Identity; I think right now there are only one or two services which are actually using Identity, and the rest have been migrated to GitLab as the authentication server. So, yeah: any questions? We have plenty of time for
questions, and they can be questions for sysadmin or questions for any of the other working groups. Are there questions? Well, if you all get on stage, can you touch your nose? Yep. Do we have any sensible questions from the audience for any of our working groups? Since you are not excited about it: may I ask Carl to remind me at least what percentage Donorbox keeps when we get a donation? I think it was 5%, but I'm not completely sure; I can check again. Other questions from the audience? Or you've been so tremendously informative and clear that we have no questions and we can go for coffee early. So, see you later. Welcome, everybody, to the last session of the day, in the afternoon after the coffee break; now everybody is refreshed and we can listen to a progress report about KDE Frameworks 6 by Andreas, Nicolas and Volker. And Alexander, sorry. Hello again. I'm saying 'again' because we've been here before, not here here, but at Akademy, the first time being in 2021, where I talked about what's cooking for KDE Frameworks 6 and elaborated on and presented various ideas and goals we had in mind for Frameworks 6. At the same Akademy, Kevin Ottens presented 'KF6: an architecture overview', where he had a more architectural approach and description in mind. Then, the next year, I was at Qt DevCon, a developer conference more focused on Qt, where I also talked about our way to Qt 6 and a bit about KF6. And then of course there was Akademy again: first Volker talked about the plans and the progress of where we were back then, and then Alex and I had a longer talk about some more in-depth topics and how to port your apps to make use of KF6. Later that year I gave a keynote at QtCon Brasil, where I also talked about our journey to Qt 6 and beyond. So you're starting to see a pattern, where each time there's one more person on stage, but you can be assured next year there won't be four people on stage, because this is
hopefully going to be the last Akademy where we talk about KF6. So how did it all start? The first time we started talking about KF6 was also at an Akademy, four years ago, which is a long time by now, in Milan. Qt 6 wasn't out back then, but it was on the horizon, so there was ample time to start thinking about it, and that's what we did in a BoF session. A couple of weeks later, some of us met in Berlin for an in-person sprint, back when people still did in-person sprints, and talked about it: we discussed ideas, we set out design goals, and we came up with a huge project workboard on Phabricator, which at some point had close to 500 tasks on it, and we started working towards that. Then, two years later, in 2021, we did another sprint, this time online, for reasons, which we used to refine the workboard. And we had weekly or bi-weekly calls where we got together for an hour and discussed various topics that came up during the week, which proved very helpful at times. So, what were the goals that we had in mind for KF6?
The obvious one is that we want to use Qt 6; that's sort of the baseline requirement we're talking about here. The most important goal we had in mind was that porting should be as easy as possible. That was a goal for Qt in the Qt 6 transition, and we wanted to have the same: no complicated, needless breakage, and breakage should be as obvious as possible, and as obvious to fix as possible. Then, of course, we want to have better APIs: ones that are harder to misuse, are clearer, have a clear purpose, are not duplicated, and are not better served by Qt or the C++ standard library, for example. Then, a lot of our code is based on Qt Widgets, but we're getting increasingly more Qt Quick and QML code, and some of the things we have in Frameworks are still quite a bit entangled with Qt Widgets, sometimes in trivial and sometimes in non-trivial ways. We want a better separation between the UI-agnostic part and the actual UI part, so that it's easier to use all of the Frameworks functionality from, for example, QML. Another goal we had in mind was reducing the dependencies some of our frameworks have, because dependencies are not nice, and dragging things along that you don't need is not nice; especially for some things like KIO, this really wasn't nice for third-party developers who want to make use of our libraries. And we also wanted a better separation between an interface to something and its implementation. For example, for KWallet we have an API that the library uses to talk to the password system, and then we have the implementation of the actual password system, and we wanted to have those a bit separated, so that it's easier to, for example, plug into an operating system's native password system, like we have on Windows. And that ties into the next goal, which is better cross-platform support, for platforms like Windows, Android, macOS, or just non-Plasma Linux systems. So, I've mentioned that I gave a talk at QtCon Brasil last year, and they gave me a nice slide template whose placeholder says: 'this
is a quote, words full of wisdom that someone important said, that can make the reader get inspired.' I thought that was a nice idea, and I really wanted to have a nice quote for it, but all I could think of was something we kept saying at the last Plasma sprint in Valencia, which is: are we there yet? So, are we there yet for Frameworks 6? When I was preparing my talk at Qt DevCon, I came up with a handy website that automatically tracks how many of our projects are ported to Qt 6, by looking at the CI configuration, and I called it iskdeusingqt6.org. If you open it right now, it will tell you that 380 out of 528 projects build against Qt 6, which is most of them, for some definition of 'most'. But don't read this as a progress indicator: we're probably never going to get to 100% there, and that's fine, and, most importantly, it's not a blocker for an actual KF6 release. Then, earlier this year, Plasma and Frameworks started relying exclusively on KF6 and Qt 6. Previously, development had been going on in a way where we built against both at the same time, but at some point we said we're going all-in on 6 now; 5 maintenance continues in a separate branch, and that's the situation we're in with Frameworks and Plasma right now, and it's working out quite well. The Plasma 6 session is already quite usable; I'm using it right now on this laptop to give you this presentation, but it just looks like Plasma, so not much to see there. We still have a few items left on the workboard: the Done column has about 260 tasks out of 400-something, but again, don't read this as a progress indicator, because some of them are more organizational than actionable, and some are more optional or wishlist items. But there are still a few challenges ahead of us before we can think about releasing a 6.0, and Volker is going to tell you all about them. Right. So, in terms of things that definitely have to happen before the release, I think we have one major thing to sort out, and that is the whole
coexistence story. We don't only have the luxury situation of Qt 6 apps in the Plasma 6 session; we also have Qt 5 apps in the Plasma 6 session, Qt 6 apps in a Plasma 5 session, and Qt 6 apps in non-Plasma sessions, all possible combinations. And there are various things that could interfere with that working nicely. The probably most common one is file system collisions: things installing to the same location, which simply doesn't work. That's usually what is referred to as co-installability, but it's only part of the bigger coexistence story; it's the one we run into at build time, which makes it easy to deal with. The other ones tend to be a bit more nasty. An example is D-Bus service names: they have to be unique, so we can't have two processes claiming the same name at the same time. We have services that can't just arbitrarily coexist, something like the wallet system: whatever an application does, whether it's using Qt 5 or Qt 6, or running in a matching session or not, that should just work. So there are things that we can't just have in parallel. Another problem area is the whole plugin-based platform integration. The most visible part of this is typically the style, and right now this problem doesn't even seem that big, because Plasma 5 and Plasma 6 are pretty much identical from a visual point of view. But over time we expect the Plasma 6 style to evolve and change, and a Qt 5 app running in the Plasma 6 session should still use the new style, so we need that in a compatible way for Qt 5 as well. Same thing with the file dialog integration: you want to use the proper platform file dialog, not some fallback, and all the stuff that that pulls in. A similar problem exists with the generic application plugins, things like the Konsole part or the Okular part: if you have a Qt 5 based Dolphin and a Qt 6 based Kate, both of them should still have their embedded terminal; it shouldn't depend on the session you're running
in which of those gets the terminal. And then there are some really dark corners when we get to environment variables that are set by the session. None of these is usually hard to solve; it's just that there are many of them, and we need to go through all of them and look, case by case, at how to address them. Typically we have three different standard approaches that we can apply, and in some rare scenarios we might need special solutions. The probably most common solution is library versioning, which is basically what applies to all the libraries: you just increment a version number, and then things can happily live side by side. That's easy on the library side, but it requires adjustments to basically all consumers. For libraries, CMake checks that part at build time; for the infrastructure around it, this is a bit more difficult to spot, and if I'm still using the wrong service name, for example, I might not notice, so some of this only shows up as we move to a pure 6 session. Another standard approach is exclusivity: just having one of the things, like in the wallet scenario. In some cases we need build options to disable stuff in 5, because we weren't prepared for this, and we might need adjustments in packaging, but it's generally completely transparent for the consumer side. However, that also implies that we need to stay compatible across major version changes. A very prominent example of that approach is the icon themes: they are very big, so we don't want to duplicate them, and there we have the XDG icon spec, which defines the compatibility anyway, so there is no risk of breaking compatibility there; that is an example of using this approach. A much less commonly used approach is multi-builds: the same code base, with a single run of the build system, produces libraries for both Qt versions. This is something we haven't historically used in KDE; a prominent example is Poppler, which produces Qt bindings for both versions, but this has recently
also been proposed by David and Harald, for example for things like the Breeze style, where it might be an interesting option. It does, however, require that the 5 and 6 code bases stay aligned, and it limits our ability to eventually retire the Qt 5 support; if we do this too early, we basically end up with the same problem again, that we can't co-install things. Then we get to the dark corners. When we think about the API, that is usually C++, QML, and maybe CMake. However, there is more that is de facto API: the executables we install and their names, the environment variables, all the stuff we put on D-Bus. And since we don't really think about this as API, but people use it as API, there is some ambiguity about what is guaranteed there, and then people get creative. An interesting example we found is how the KDE_SESSION_VERSION variable is interpreted: some users just concatenate it onto D-Bus names or executable names and then basically assume that the same thing will exist with version 6 as well; others just error out. And this happens in niche applications like Chromium or LibreOffice. Things like xdg-utils, which handle opening files and URLs across the entire Linux platform, are also kind of important, as are libraries we use, like QtKeychain. So we need to identify all of those things in the external consumers too, and then see if we can make the new API match their expectations, or whether we need to fix stuff upstream, so that it's ready in time when we get to releasing KF6. As I said, this is probably the only really hard blocker. There are, however, many more things that we would still like to see, or that are even somewhat critical to get done. For example, ECM isn't able yet to build APKs, and that's a pretty hard problem on Android. And there are things we would like to get done in KIO, to be able to replace the aging HTTP implementation, for example; but worst case, we'd survive without all of that for the initial release. If you're
interested in discussing and working on those details, we have a BoF on Tuesday at four o'clock. We also have another BoF on Monday about how to port your applications, and that is what Alex is going to talk to you about now. Yeah, thanks, Volker. First of all, we're going to talk about the kdesrc-build setup for porting your apps. As always, this utility is your friend and helper when it comes to building KDE software, and it takes care of a few things for the KF6 builds: for example, choosing the correct CMake arguments, because in some projects you explicitly need to turn on the KF6 or Qt 6 builds, and sometimes you need to disable some deprecated APIs. It also chooses the correct branches for you, because some projects, for example Dolphin, have a separate KF6 branch, but we'll come to that in a moment. It can also compile third-party packages, for example dependencies that are needed to build a KDE framework or KDE software in general. The KF6 builds are configured by the global branch-group setting, in the global section of your kdesrc-build config file: you just need to set the branch-group to the value kf6-qt6, and then it automatically knows what the correct CMake arguments are and which branch it should check out. But it is recommended to have a separate prefix from your KF5 one, because there are still some remaining co-installability issues, and apps, for example Dolphin, simply aren't fully co-installable yet. That is configured using the kdedir variable, which is the install directory, and likewise for the source directory and build directory; you could maybe reuse the source directory, but that would make the rebuilding process a little bit longer. You can do this in your main config file, or you can have a separate, custom config file and pass that in using the --rc-file argument to kdesrc-build. But since we are lazy and efficient, it is best to have a simple alias for it.
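Put together, the setup just described might look roughly like the following kdesrc-build config fragment. This is a sketch, not the speaker's slide: the `~/kde6` paths and the file name are placeholders, while the option names (`branch-group`, `kdedir`, `source-dir`, `build-dir`) follow kdesrc-build's documented config format.

```
# ~/.config/kdesrc-buildrc-kf6 -- a separate config just for KF6 builds
global
    # pick KF6/Qt6 branches and matching CMake arguments automatically
    branch-group kf6-qt6

    # keep the KF6 install prefix separate from the KF5 one,
    # since not everything is fully co-installable yet
    kdedir ~/kde6/usr

    # separate source and build trees as well; reusing the source
    # tree is possible but makes rebuilds slower
    source-dir ~/kde6/src
    build-dir  ~/kde6/build
end global
```

Selecting this file with `kdesrc-build --rc-file ~/.config/kdesrc-buildrc-kf6` (or a shell alias wrapping that invocation) keeps the 5 and 6 setups from stepping on each other.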
There's a snippet on the slide that just loads it from the kf6 folder in your home directory, and that works in both bash and fish, and maybe zsh too, but I haven't tried that. So now we can get started on our apps, and the first step, as Nico and I told you last year at Akademy, is to disable deprecated API. But you should really make sure that you are using the latest Frameworks, because a lot of the API that was removed in KF6 had alternative API, or some porting aid, backported to the KF5 branch, and those are only contained in the later versions. Then you of course need to adjust your build system. For non-library code, it is recommended to just use the versionless CMake targets, and that is what we do in Plasma. For other cases, like libraries or frameworks, we need to use the Qt major version; but in KF6 master, if you want to call it that, we are Qt 6 only, so at least there we don't have to. You can just inject the major version using the normal CMake syntax for injecting strings, and you can use it for both the find-module calls and when using KF5 targets, because we intentionally decided not to have versionless KDE Frameworks targets. But you should make sure the QtVersionOption module is included; that is a module from extra-cmake-modules, or ECM, and it is usually included by the KDE CMake settings already, but in case you are getting weird error messages, keep that in mind. You can see a really simple snippet of where to use that major version. And since doing all of this manual work is tedious, Lohor has shared his script for adjusting most of the build system stuff; you can check it out in the kde-dev-scripts repository, where there is a kf6 folder which contains this build system script, and there are also some other scripts, for things like porting KDE APIs. But not all changes can be caught by deprecation macros: for example, changes to virtual methods, because those are binary-incompatible, or classes being renamed or moved.
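As a sketch of what that build-system adjustment can look like (the `myapp` target is a placeholder; `QT_MAJOR_VERSION` is provided by ECM's QtVersionOption module, which the usual KDE CMake setup pulls in for you):

```cmake
# One CMakeLists.txt that builds against Qt5/KF5 or Qt6/KF6,
# depending on which major version the build was configured for.
include(QtVersionOption)  # from ECM; defines QT_MAJOR_VERSION

find_package(Qt${QT_MAJOR_VERSION} REQUIRED COMPONENTS Core Widgets)
find_package(KF${QT_MAJOR_VERSION} REQUIRED COMPONENTS CoreAddons I18n)

# KDE Frameworks targets are intentionally versioned, so the major
# version has to be injected there too; pure application code could
# use Qt's versionless Qt:: targets instead.
target_link_libraries(myapp PRIVATE
    Qt${QT_MAJOR_VERSION}::Widgets
    KF${QT_MAJOR_VERSION}::CoreAddons
    KF${QT_MAJOR_VERSION}::I18n)
```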
That is also what was done for the KCMs: we had the KCModule class in KConfigWidgets, and in KDeclarative some other KCM-related classes, and all of them were moved to KCMUtils and got a new name. Those need to be adjusted with a preprocessor macro. In some code you can see that a Qt version check is used, but if we only want to differentiate between the 5 and 6 versions, we can use the major version directly and don't need the full QT_VERSION_CHECK. QML runtime issues are very critical to port, especially when they are incompatible with KF5. It is possible to configure those files using CMake: it has a configure_file method that allows you to dynamically inject certain strings, and you can then install the generated QML files or include them in your QRC — that is what's done. But depending on the complexity it can be a big hassle: if only a few properties got renamed or an import has changed, it is of course doable, but for more complex refactorings it is possible, and even desirable, to have a separate KF6 branch. That leads to a cleaner codebase, because you need less compatibility code and you can utilize new features and APIs if you wish; but there is also the risk of divergence, and you have a bit more maintenance effort. So it also depends on how actively developed your app is — whether there are still many features being developed or not. Now to actually doing the porting, since we know how to proceed: the plugin system is one of the major challenges. In KF6, both the runtime and the build-time JSON conversion were removed; that includes the KPluginMetaData::fromDesktopFile method and also the kcoreaddons_desktop_to_json CMake function. The desktoptojson CLI tool, which you should use for the manual conversion, is still kept — and if you're wondering why this one doesn't have a major version suffix: it was removed in KF6, so we don't have to worry about co-installability, and you can use it just from your PATH.
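For illustration — with hypothetical file and plugin names — the conversion replaces a .desktop metadata file with JSON that is embedded into the plugin, and the plugin ID is then derived from the library's base name:

```
# Run once, manually (the tool ships with KF5's KCoreAddons):
#   desktoptojson -i myrunner.desktop -o myrunner.json
#
# Resulting myrunner.json, embedded into the plugin via
# K_PLUGIN_CLASS_WITH_JSON(MyRunner, "myrunner.json"):
{
    "KPlugin": {
        "Name": "My Runner",
        "Description": "Example runner plugin"
    }
}
```

Since the ID comes from the base name in KF6, a plugin built as `myrunner.so` gets the ID `myrunner`, regardless of what the old metadata said.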
When you encounter deprecation warnings regarding the plugin system, the documentation of the API provider and the warnings you get on the console are quite important, because they tell you what plugin namespace should be used and which version the change is compatible with. After the conversion, you just need to adjust your macros — you can see two of them here; those are the changes. Also, the way of determining the plugin ID has changed: previously, the ID was often specified in the metadata, but in KF6 we only use the base name for it. You also get a warning on the console in case the ID you've specified — which is now internally ignored — is different from the base name; you can just remove it in that case, because that is compatible with KF5. The porting of KServiceTypeTrader is documented and was already discussed last year, and since then there are only a few breaking changes for plugin providers: for example, the keyword parameter of the KPluginFactory::create method was removed, but that would only affect you in case you have a custom macro for it or a custom factory. There are some improvements you can utilize: for example, the findPluginById method has better performance, because it can now directly load the plugin from disk and doesn't need to query all the available plugins, and that also allows better compatibility in case you want to utilize static plugins. There is now a parameter for allowing plugins with empty metadata, and what is most relevant for developers is the debug operator, so you can log a KPluginMetaData object — here you can see an example of how that would look, with the plugin ID and the file name. And you can of course benefit from the optimized internals of KPluginFactory and KPluginMetaData. We would also be very happy to have you at the BoFs, and feel free to ask any questions. Are there any questions?
I mean, for the Okular case I don't think we have a nice solution at this point. At least part of the idea of how we try to minimize the problem is to get as much as possible done for the first release, or within a relatively short period of time. That doesn't solve the problem in a theoretical way, but practically the problem is much less likely to occur, and maybe we get away with some of those corner cases. But yeah, in theory you would need to co-install Okular in both versions, and I'm not sure we really want to go there. — Well, not necessarily the full Okular — just the Okular part and everything that it uses, like some internal Okular libraries, but not necessarily two Okular executables. — Sure, but I mean, this is like 80 or 90% of the code, right? It's maintaining the KF5 thing for some more time. Ideally we wouldn't want to do any of that, right? In an ideal world... — I mean, in an ideal world we just get everything ported and ignore that problem. I hope we can do much of that; it's probably to some degree also a case-by-case thing. Okular is widely used as a part, but I think it's manageable to port most or all of the consumers. Then there's stuff like kio-extras, which we will probably have to ship in both flavors for the foreseeable future, because KIO is used everywhere, and without kio-extras KIO is not that useful. — Yeah, that would break the file dialogs, right? — I mean, as long as we have major KIO consumers out there, we can't break their file dialogs right now. — Okay, so maybe I'm asking a stupid question or I've misunderstood something, but can't you just drop the .qrc files, since the new Qt6 CMake API will basically make a QML module and list the resource files under resources? — That's for the QML porting, right? Yeah. The thing is that we don't use that at the moment; we have our own version of it in ECM, it's just called ecm_add_qml_module. But in case one needs to inject such compatibility strings using CMake, one still has
to make sure that the version of the file from the build directory — the configured one — is picked up, and maybe one can just specify an absolute path in the Qt CMake function for that. Or how is it practically used? — Okay, yeah, better to wait until Monday, I guess. — Perfect. I have a question regarding the plugin thing. I'm sure you have already thought about that, but if you decide to drop the KF5 plugin, for example for Okular, and replace it with a plugin for KF5 that internally uses the KF6 plugin — would something like that work? — The main problem there is that we can't really have those two major Qt versions in the same process. If that were possible, then a whole lot of those problems would go away, but since that doesn't work, we can't put this in the same process and we can't try to bridge or proxy it. That will always end up with both Qt versions in the same process, competing event loops and clashing symbols, and it will crash. You would need to put it out of process, but then we are getting into a whole new world of pain. Let's just port everything and be done with it — that is a much easier approach. — Actually, we can have two Qt libraries in the same process; we made that possible for Qt5 and Qt4. But it requires that the application that opens the plugin uses a feature called deep bind, so that the plugin does its own linking and finds its own libraries instead of reusing those of the host library. Of course, you still can't do anything about dual event loops; only if the interface doesn't use any Qt data structures can you do it. We had these problems where certain types of libraries that had plugins used Qt, so we solved that back in the early Qt5 days. But that is on a simple level — it only works if you have an API that avoids Qt interfaces, so it's not that useful for KDE. — Qt's plugin loader would usually refuse to load any plugin that was built against a different version, even within the same major version. — I mean, also with stuff like namespacing and so on
you could work around the symbol problem, but the Okular part is an interactive thing, so it needs rendering and user interaction — please, let's not go there. — Yeah, so I didn't see any date in there. — Well, technically there is no date yet. The current preliminary thinking — and all of this is subject to change — is that we basically have to go backwards from the Plasma plan for Plasma 6. The current state is still aiming towards the end of the year, so assuming we manage that for Plasma, we would need Frameworks in, say, November, which means starting with pre-releases in August or September. So this is getting really close. And then maybe Gear 23.12 would fall into the same cycle and could contain the first KF6-based releases. I think that is the best-case scenario; we certainly won't be faster than that. So far I think we are still on track to make it, and if we don't, then it slips by four months or whatever to the next window where we could make it. But the plan is basically to follow the Plasma plan and go backwards from there. — Okay, thank you. — So, in the next talk, Marco and Niccolò will present what Plasma 6 has in store for us. — Hello everybody, let's talk about Plasma 6. We'll get to the technical part of things with Marco — I don't know anything about that — but we'll talk about design and style first. So, about that: the idea from the VDG some months ago was to try to do some incremental style changes, without any big revolution in the design and the style.
But whilst I was putting together the presentation, I realized that we did not do that: in fact, we're still months away from release, and yet we already have a lot of visual changes and redesigns. Not all of them have landed and are ready to be used in master, but everything that I'm going to talk about today should be in Plasma 6, if I'm not too lazy. So, starting off with some stuff about Plasma itself: we've got a completely redesigned overview, which, again, is currently not in master — it's in a branch, and it's almost done. It looks completely different compared to what we have currently: much more like the GNOME overview with the Blur My Shell extension, which I did use as a reference whilst implementing this. Here it's not just the design that's different; it actually works completely differently, because it now also includes the grid view. The idea is that there are three states — no overview at all, the normal overview, and the grid view — and you can switch from one to the next or to the previous one. Instead of having two different effects, it's a single one that has both the grid view and the overview. Whilst doing that, the touchscreen and touchpad gestures changed completely as well, so now you can switch between the three states. As an example, just by doing one swipe up with three fingers, you switch to the next state, which is the overview; if you do it again, you switch to the grid view, and if you do three fingers up yet again, you get back to normal. The opposite happens if you swipe down with three fingers. The same applies to the touchpad, so the gestures are consistent between the touchscreen and the touchpad.
So it's a pretty big rework of how you interact with your open applications, and ideally this should make all the transitions a bit nicer; also, switching virtual desktops whilst in the overview looks much nicer as well. Next up, we would like to have — I'm sorry, we do have this already, it's in master, it's ready: the settings for the panels have been completely redesigned. Now they have these little drawings that show you what would happen if you click on each setting, which I think are super nice looking and should be applied everywhere in System Settings. However, this cannot be the last revision of the panel settings, because we do have to change them again to change how we set the position of the panels, because of some technical things that I have completely lost track of. Next up, we would ideally like to have floating panels by default. It's a discussion that we're having, but to do that they need to work a bit better. So there is a merge request — again, not landed yet — which re-implements a lot of the features that floating panels have. Now they do have a shadow, which they didn't previously, and also, whenever there's a maximized window, they just de-float vertically without taking up any more space, as they used to do previously. They had this big margin around them; people didn't like those margins, rightfully. Now they're just gone — the panel just floats towards the top or the bottom. So that's nice, and ideally, who knows, that might make the floating panels usable by default. Next up, applets are going to be redesigned too. Again, this is very much ongoing work. What we do have already is this couple of very pretty switches, and the idea is that everything that gets applied immediately as soon as you click it is going to be a switch, not a checkbox; checkboxes are going to be used for things where you click and then also have to click Apply, as in some settings, rather than for an immediate action.
But yeah, I do think that these switches within the applets look very pretty. We also have a redesigned task switcher. This comes from the Plasma sprint just a couple of months ago, and the idea is that we couldn't quite agree on which one was the prettiest task switcher, so we took a couple of them and just smashed them together. Now we have thumbnails for each application, and we also have the icons on top of the thumbnails, so that's going to make it much easier to switch to the right one. So these are the things that are going to happen to the Plasma shell itself. I hope I'm not forgetting anything — I probably am — but we'll also get more stuff as we get closer to the release date. There's also some more stuff... yes, I almost forgot about this one, sorry. These are floating dialogs. They are, again, not in master, but almost ready; the only thing left is to decide how to expose them to the user, and we also kind of have to decide if we want them at all. If we do want them, they're going to look kind of like this — not by default, but as an option that can be toggled either by the user or by the Plasma theme itself. So you just put some margin in the SVG file, as an example, and that gets applied around the dialog so that it's floating. Used together with the floating panel, this actually helps reduce the number of margin errors — like the slightly ugly stuff that happens when you open Kickoff and it just floats into nothing. So it's a bit better looking with this option. Again, it might not be in Plasma 6, but who knows. Next up is stuff that's not about the Plasma shell itself. As an example, we have a merge request ongoing for completely redesigned icons for places and Dolphin stuff. This was started by Cameron Matt about a year ago or so.
Unfortunately, he wasn't able, due to life, to finish up the work, but we now do have the draft and all the Python code that went with it — and in fact it's a lot of Python code that completely automates the process of creating new MIME type icons and folder icons. It's a super big thing that needs some Python developer help, so if you know a bit of Python and want to contribute to KDE, this could be a good starting point: it's just some tooling that needs to be finished. Next up, we have a redesigned mouse cursor theme, which, if I recall correctly, is currently in master. I think it was done by Manuel — again, some thumbs up. I didn't look much at it; it looks slightly different. This might not be the final version — no, this is the final version that will for sure be in Plasma 6, and I think it's a bit darker and a bit prettier. It's incremental, but it looks good. We also have a redesigned — if that's the word for it — sound theme, which isn't in master yet, but it is done. It is a bit weird, from a technological point of view, how to deliver this one to the user, because currently we do not have the concept of sound themes that you can just swap out; we don't have a System Settings page with a lot of sound themes to switch between. So one idea would be to just replace all the old Oxygen sounds with these new ones. It would be a bit weird, but we could do that — and there's a note from the audience, so we are not going to do that. Again, I'm kind of outdated on the latest developments on this one, but you can check it out. And finally, there's also the idea of having colorful window headers. For context: this doesn't exist yet; these pictures are edited, and they're also misleading, because you see a lot of different colors between windows — that's not the idea. The idea is that you have an accent color, as always, and that accent color is slightly applied to the header of the window that you're currently using.
It's going to be the same accent color for all the headers, but only for the active window, so that it's easier to distinguish what you're currently using from the other windows. Also, there's no merge request or patch that currently implements this. There are two different options: one actually paints the header bars, but it uses a much stronger color — actually the same as the accent color itself — so we don't want to go that route; the other one does tint the colors, but it tints all of the colors of the window, which is pretty cool, but we didn't want to go that route by default either. So this is the idea: you pick an accent color, and it gets slightly tinted — probably not as strongly as I presented it here, which is just so you can see what's going on. Just something very subtle to help you identify the currently active window. From the design point of view, I think that was everything. If you have any questions, I guess we'll do that at the end, and if I forgot anything — again, I'm really sorry; that's what I was able to come up with. — Okay, so now, unfortunately, the pretty pictures are over and there are boring technical details and code samples, so bear with me, please. Most of the rest of the talk will be about things inside the Plasma shell itself and how Plasma is written, but it would be unjust to talk about Plasma 6 without mentioning some important things that are coming in the other very big component of a Plasma session, which is KWin. Just three important things to mention. Already in master, KWin will support HDR: if you have the right hardware and the right monitor, and you run an application that natively supports HDR, such as a particular game or a particular video, it should work out of the box — everything more colorful and beautiful. Another big feature on Wayland, which is very important for feature parity with X11, is compositor restart support.
Until now, on a Wayland session — on any system, on any desktop — if your compositor crashed, or if you just wanted to restart it, you lost your whole session and all of your applications. Now applications will survive the compositor going away — at least the applications that support the protocol — and when the compositor restarts, you should get all your applications running like nothing happened. Another thing, not there yet, that we are planning to support, and which could have quite some good applications: there is a new Wayland protocol floating around about workspaces, which is basically a virtual desktop protocol on steroids. It will allow us to do things like different virtual desktops per activity, or different virtual desktops per output — which also ties into the further support for tiling that we will do in KWin, since, apparently, for tiling window manager users, having different virtual desktops per screen is quite an important thing. Now, moving on to what's happening in the rest of the Plasma session — your panels and your widgets — and what changes are in the Plasma library and in how you write plasmoids. The rest of the talk will be about that, because there are many third-party plasmoids on the store, and whoever maintains them will have to do quite some work. The good news is that almost everything that is around at the moment — everything in plasma-workspace and whatnot — has already been ported, but there are quite some important API changes. Let's start with a thing we are getting rid of: DataEngines. That was a concept we had from the beginning of KDE 4. At the beginning, we wanted to write plasmoids in JavaScript — that was before QML even existed — so we came up with a completely imperative API, and that API was the way to get data about your tasks, your notifications and whatnot from a C++ source. It worked very well in an imperative world; it doesn't really work anymore.
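As a hedged sketch of the replacement pattern — the names here are invented, and this fragment needs the Qt headers and moc to actually build — a plasmoid's backend becomes a plain QObject with declarative properties, registered as a QML type, instead of a DataEngine:

```cpp
// timemodel.h -- hypothetical backend replacing a "time" DataEngine
#include <QObject>
#include <QDateTime>

class TimeModel : public QObject
{
    Q_OBJECT
    // Bindable from QML, e.g.: Label { text: timeModel.now.toString() }
    Q_PROPERTY(QDateTime now READ now NOTIFY nowChanged)
public:
    explicit TimeModel(QObject *parent = nullptr);
    QDateTime now() const { return QDateTime::currentDateTime(); }
Q_SIGNALS:
    void nowChanged();
};
// Exposed to QML with something like:
//   qmlRegisterType<TimeModel>("org.example.time", 1, 0, "TimeModel");
```

The point of the pattern is that QML bindings update automatically through the NOTIFY signal, where a DataEngine needed its imperative connect/disconnect dance.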
We do have bindings for them from QML, but it doesn't really match well. That job is done much better by a QML extension: you write your QObject with properties and data models and whatnot, and expose that to QML — that is now the way to do it. Everything about DataEngines has been moved out of libplasma; it's in its own repo in Plasma workspace, called plasma5support, so for the time being all the existing DataEngines still work, but we are planning to eventually get rid of all of that. Second thing: SVGs. We will still support SVG themes as they are. You may think they're getting a bit long in the tooth — that's correct, and we will eventually have some way to replace them. It won't be for Plasma 6.0; it's something to think about for the future. But in the meantime, some things in the SVG-related classes were actually quite useful. For instance, I heard many times that somebody wanted some simple SVG icons in their Android application, recolored with a color theme and with a cache on disk, because loading SVGs on Android with Qt SVG was not really great. Plasma did have classes to do exactly that, but they couldn't be used, because of all the dependencies of libplasma, which depended on everything. Now all of that has been moved to a different framework called KSvg — basically recycling the name of an old, dead one; that's fine. So there's the Svg and FrameSvg classes, still supporting the nine-patch thing. Qt SVG is not exposed anywhere in the public API, so in the future we could even, in a compatible way, switch the backend if the need arises. For now, if you were using it, porting is very simple: it's pretty much just changing the import and changing the namespace, and it's okay. All the old API still works for simple SVG items; you are now just recommended to use this more compact form, but the old form still works.
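A hedged sketch of what that port can look like in QML — the import URI, version, and the imagePath shorthand are assumptions to be checked against the generated KSvg documentation:

```qml
// Before (Plasma 5): PlasmaCore.Svg + PlasmaCore.SvgItem
// After, the more compact KSvg form:
import org.kde.ksvg 1.0 as KSvg

KSvg.SvgItem {
    // same theme element naming as before, e.g. widgets/background.svg
    imagePath: "widgets/background"
    elementId: "center"
}
```

The namespace change is the bulk of the work; the element and file naming inside the theme stays as it was.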
A nice addition, compared to the Plasma version: if you had SVGs that support all the stylesheet stuff — say, a rectangle with the same color as the system background color — this now works with normal system colors, and it integrates with Kirigami and the Kirigami Theme class. So even if you replace a color by hand, your SVG follows the new color as well, which is nice. On the C++ side there is this ImageSet class, where you define where your SVGs are, so you don't have to use a Plasma theme: in your application you can have any set of elements and files you want, and you can ship it, for instance, in the data files of your application, or together with all the QML files, or wherever you want — you control it. That's basically it. The biggest headache, if someone wrote a plasmoid or maintains a third-party plasmoid on the store, is a big change in the plasmoid API. The Plasma API still had many components from the old imperative JavaScript API, and we wanted to get rid of all of them. So you still had this magical context property — which is not a good thing to have in QML — called plasmoid, which was a QQuickItem wrapping most, but not all, of a kind-of-similar-but-not-exactly-the-same API of the C++ Applet class of Plasma. Now you only have the uppercase Plasmoid, an attached property which is directly the Plasma Applet instance, so you can use basically 100% of the API — everything that is exposed as properties, invokables and signals. And what about that QQuickItem? It became a graphical element called PlasmoidItem, and it is the item that now must be the root element of your plasmoid. Comparing the two: we used to have any random item as the root item and then set the attached properties for the representations, so the full representation would basically be your item — and in most plasmoids this actual graphical root item was not shown anywhere in the scene, which was kind of weird.
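A minimal Plasma 6 plasmoid skeleton under these rules might look as follows — the import versions and property names are my reading of the new API, so treat this as a sketch:

```qml
import QtQuick
import org.kde.plasma.plasmoid 2.0

// PlasmoidItem is now the mandatory root element; the uppercase Plasmoid
// attached object exposes the C++ Applet API directly.
PlasmoidItem {
    compactRepresentation: Item { /* icon shown in the panel */ }
    fullRepresentation: Item { /* the expanded applet UI */ }

    // semantic properties stay on the Plasmoid attached object
    Plasmoid.title: i18n("My Applet")
}
```

The graphical concerns (the two representations) live on PlasmoidItem itself, while the semantic ones go through the Plasmoid attached property.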
Now you declare it like that, and it will be the actual root object in the scene of your plasmoid. The more semantic properties — like the title, or the status, whether it's expanded or not — are still on Plasmoid, which is the applet, and the purely graphical ones, like the full representation or the compact representation, are direct properties of that PlasmoidItem. That's the main thing that changes. Another thing that was taken wholesale from the JavaScript API was the actions API. A plasmoid can add an arbitrary number of context menu actions, shown if you right-click its icon in the panel or wherever. It used to have this very imperative API: in Component.onCompleted, a bunch of JavaScript calls that created an internal action; if you wanted to bind something, you even had to use an imperative API for property bindings; and then, to react when the user clicked it, you had to declare a function with a kind of magical name — so if the action was called "previous", the function was action_previous and then it would be called. Now, on the attached Plasmoid object, contextualActions is a list property, so you just declare a list of everything you need, and you write the actions declaratively, with all the bindings and whatnot. This action type is actually backed by a real QAction, because there are things that still go back and forth between QML and C++, so they're not QML-only actions. It's something I would really like to see supported in QML — the fact that QML actions are not real actions is still something I'm not completely happy about, so hopefully in the future we will have something better. Another thing changes in how things are done: when we first designed Kirigami, we had lifted pretty much one-to-one some concepts that we were using in Plasma and could not reuse there directly because of dependencies — and that brought quite some code duplication. Basically, two big classes: the Plasma Theme and the Plasma Units.
The Plasma Theme is for all the colors — background color, text color and whatnot — that magically come from the proper color theme. But over the years the Kirigami version got much more advanced than the Plasma one, with things like inheritance: I can say that the Kirigami theme of this item is "Header", so if headers have different colors, everything in that item, including its sub-items, will get the header colors. That is possible because it's an attached property that propagates through the whole tree of the scene, while the Plasma version was a singleton — the same for the whole application — so it didn't really work. Also, in Kirigami, if for some reason the application wants that, in one particular part, all the children have a background color that is not the normal one but a particular shade of purple, for reasons — that works. So in Plasma 6, the Plasma Theme goes away: just use the Kirigami version. Units — for animation durations, layout spacing and whatnot — likewise: use the Kirigami version. Another class that was in Plasma was ColorScope, which implemented that color inheritance I talked about before; that also goes away in Plasma 6, so, yeah, just use the Kirigami version. A blocker that kept us from doing this in Plasma 5 was that we may need two different themes at once: plasmoids need colors that come from the Plasma theme, while configuration dialogs need colors that come from the system theme. By default those two are the same thing, but some Plasma themes have their own color set, so we have to return the proper colors depending on where we are. Now this works in Plasma 6, so that's fine. Also, yeah, units: use the Kirigami version. Icons: we had a particular widget for displaying icons in Plasma that was also duplicated in Kirigami, which didn't make any sense, so now there is only the Kirigami version. It's kind of unfortunate that the one icon-displaying item that properly works is not part of Qt itself, but at least we have only one implementation instead of two. So most of the work in Plasma 6 was actually to slim libplasma down, with
much less code to maintain — so hopefully fewer bugs. Any questions? On both my part and Niccolò's part. — Regarding the floating dialogs: are you going to have a way to show a little arrow that points to the applet that requested it? — Actually, implementing that would be quite painful, to say the least, so I would be inclined to say currently no. Ideally I would like it; it's possible — it's painful, but not technically impossible. Also, one thing that I forgot to mention: the whole implementation of those pop-ups, the Dialog class, was quite a mess internally and has been rewritten, so that may make it a bit easier to do in the future. — I must say I'm sick and tired of the Oxygen sounds, so I'm glad there's finally something new coming up. I'm also happy that you looked into the XDG sound theme spec, which is something I've wanted to implement for the longest time. So we should have a chat about all of that, if there's anything else you need, or if there's something still floating around on my computer that you could find useful. But yeah, very nice work on the whole sound stuff. — Let's bring back the broken-glass sound! — So, now that we have the ability to use Kirigami theme stuff, can you touch a little bit on the blockers to unifying the actual components themselves, such that we could also get rid of the duplication between, say, the Kirigami and Plasma versions of things like PlaceholderMessage, labels, things like that? — Right. There is still one problem with just using every single Kirigami component in Plasma, and it's the same problem of having to support two themes in the same process. Everything that is just colors — labels, headings and things like that — we can now use without problems. Everything that uses Qt Quick Controls — like, if a control is inside, say, a button — becomes a problem, because simply doing the QtQuick.Controls import gives you only one possible style per process; it's how
the button type is registered, so we cannot have two, and that is still an unsolved problem in Qt 6, unfortunately. — No more questions? Then thank you, Marco. — Okay, thank you all for being here for my Akademy talk. I'm going to talk about KRunner — past, present and future — and that includes topics like porting, new features since I started contributing, and also facts about distribution. So how does KRunner affect you? Most people only know it as the standalone executable that you launch by pressing Alt+Space or the Alt+F2 key, but it is also a flexible framework, and it powers your normal application launchers like Kicker; because of that, it is an essential part of the KDE desktop experience. Now I come to my structure: first I'm going to tell you who I am and how I ended up in KDE; then we go back to the KDE 4 times, and we look at the major improvements and features since I started contributing — though, since I've only been in the community for three years, I can't cover all the features, so it's an opinionated subset. Then we look at KRunner's ecosystem, specifically the plugins, and afterwards we look at the upcoming changes and improvements in KDE Frameworks 6, and also at some of the things that are not yet implemented but planned or in progress. So, who am I, and how did I end up in KDE? I first tried KDE in 2019 and immediately loved it, and I initially played around a lot with the applets — it was very fun, installing new applets just because you can use them, but it was mostly just playing around. Only later did I discover KRunner and all its features; at the time, the normal application launcher did not have all of the plugins available that KRunner has. I then played around with some of the KDE plugins, installed some other ones from GitHub, and eventually I wrote my own plugins — if you want to call them that — using KRunner Bridge; and then I decided to rewrite my plugins in C++, which is what some KDE tutorials also suggested when you
want to create a KRunner plugin. I barely knew C++ at the time, so it was quite adventurous, and most of those plugins are still around on GitHub. For example, here you have an overview of my pinned GitHub projects, and you can see the JetBrains Runner plugin, which allows you to integrate the recently launched projects of the JetBrains IDEs into KRunner; I also made a Dolphin counterpart for it. I had created a plugin for searching, copying and pasting emojis, and there's also an integration for KWallet; that's maybe not super useful for most people, but at the time I found it useful to have.

When talking about how one started contributing to KDE, everyone has an origin story: a feature they ideally wanted, or a bug that was annoying them. In my case, I had some runners that required configuration, and they weren't very useful without having some settings configured there, and ideally I wanted to do that right after the installation. So I had the simple idea of passing in an argument when launching the Plasma Search KCM, which would then open the runner's config module. That was my first Phabricator patch for Plasma Search, and I'm still here three years later; contributing to KDE has proven to be quite addictive.

Looking back at the KDE 4 times, I have a screenshot, and you can see there's of course some resemblance, like the normal apps such as Konsole that are provided in the results; Konqueror is not as present as it was back then. There are of course some differences. For example, in KDE 4 we had a user interface, and KRunner had some API to directly integrate into it, called run options, where each runner could produce a custom widget for a given match. That API was still around in the KDE 5 times even though it didn't work, and I also stumbled upon that creating my
first runner. Also, there was a help button, and that is one of the features I'm going to talk about in a moment, because I revived it for KDE 5. Back then we also had scripting support using Kross, which was a KDE framework that allowed you to create scripts in Python, Ruby and JavaScript; so we had runner plugins in all of those languages, and Kross would take care of making them work.

One feature that I mentioned a moment ago is the help integration. In KDE 4 this button was coded as part of the UI, so it was only specific to the KRunner executable; but the RunnerSyntax class, which is used by the plugins to provide information about the queries and their description, was still in KDE Frameworks 5, and the plugins still used it. Some third parties even adopted it, maybe they just copied it from existing KDE code. But I wanted more flexibility by implementing it as a plugin, and I did a little magic trick there: I cast the parent to a RunnerManager object, which is always guaranteed to succeed, so that I can access the runners and their syntaxes.

Here you have a screenshot of the help plugin in action. In the top left corner you can see that when you type in a question mark, or press the button with the question mark in it, you'll get an overview of the available runners and their first example query; for example, we have the Places runner here. But some runners have more syntaxes, and because of that you can select one of the matches and you'll then get all available syntaxes; that is what you can see in the top right corner. The date-time runner, for example, has certain trigger words like date and time, and can optionally have a query after that, and here you can see that it is put in angle brackets. When you then run one of those, the text that is in angle brackets will be selected, and the other text will just be inserted into the UI, so that you can easily try out those example queries and you
still know what you are supposed to type in the placeholder; you can override it with just one keystroke, because the text is selected. And since it's a plugin, it is also available in Kickoff; here you have an overview of how that looks, and the scrollable view is also quite nice because you get all the runners. In 2021 there was a really cool Reddit post highlighting some of KRunner's features, and hopefully you'll be able to discover those features yourself; if anyone ever needs to do a similar post, it will hopefully take them less time.

Also, KRunner supports multi-line text, and this is used in the dictionary runner and also the help runner. Besides the description being multi-line, it can also contain Qt's rich text markup, which is used for organizing and highlighting information, and it can be activated using a simple setter. Here in the screenshot you can see that the example queries, like screen brightness with the placeholder after it, are displayed in bold, and the description comes after that, so that you can visually separate the example queries from their description.

Also, testing is quite important, and that perfectly aligns with the KDE goal that was presented already. KRunner has gotten a lot more test coverage, in the framework and also in various plugins. Before I made an improvement there, it was quite hard to write tests that actually used a plugin, because you either had to load the plugin manually from the build directory, since we usually want to be able to run tests uninstalled, or you had to build the runner class into a static library; that is of course a bit tedious and causes annoying CMake code. But luckily we now have the krunner_configure_test CMake function to the rescue, and that works for both D-Bus and C++ runners. We'll talk about D-Bus runners in a moment, but the concept is that they are just a separate process, and KRunner can query them using D-Bus and a
specified API. To use this CMake function for D-Bus runners, we first have to pass in our test target and then our executable target for the D-Bus runner, and also the desktop file name and the desktop file path, because for D-Bus runners the metadata is not embedded; for normal C++ plugins we don't need to pass in the desktop file, because there the metadata is embedded. This CMake function is accompanied by the AbstractRunnerTest header, which contains some utility methods, for example initProperties, which checks that the runner plugin can be loaded and then loads it into the runner manager. For D-Bus runners you can also start the D-Bus runner process for it, and it makes sure that the process starts properly and the D-Bus service is registered. And since you can start processes, you of course also want to stop them, for example when your test ends, or if you want to have a clean state during your test execution. But the method you will probably be using most of the time is launchQuery, which takes the given query and tells the runner manager to launch it, and then waits with a given timeout for the runner manager to finish. While preparing these slides I thought it might be useful if this method actually returned the matches of the runner manager, so I patched that a few days ago, and that should make your testing code even simpler.
What was also annoying quite a lot of users was the duplication in the KRunner results. For example, the recent documents runner and the Baloo runner often produce matches for the exact same file, because one remembers your recently used files and the other one your indexed files; if an indexed file was recently opened, it is logical that both of them will show up. Also, the shell runner, which executes shell commands, and the applications runner often produce results for the exact same thing, like firefox: the application's desktop file would execute the given command, but the shell runner would suggest it as a separate match. Now we have deduplication based on the match ID. It needs to be explicitly enabled in the metadata, because not all plugins properly set an ID for their matches, but all of that is documented on develop.kde.org, and for most runners in KDE it is already implemented; there are only some smaller remaining issues.

Now I'm going to talk about KRunner's ecosystem. We had the KDE goal All about the Apps, and because of that I thought "all about the plugins" would be a suitable headline for KRunner, because KRunner is entirely based on plugins. Remember that I said in the KDE 4 times there was scripting support using Kross; all of that was removed in the KF5 transition, and all scripted runners became unsupported as a result. But with D-Bus runners we luckily once again have a thriving ecosystem, because it's really easy to create such a runner in, for example, Python, and there's an official template repository for that. Since the D-Bus runner API was initially created by David Edmondson, there have been some improvements. For example, there are now lifecycle methods like Teardown that allow you to clean up any data or invalidate the cache when the match session finishes and KRunner is closed. There are also optimizations, like a specific trigger word for your runner, so that the D-Bus runner process is not started before it is actually useful: I had a little plugin that allowed me to mount VeraCrypt volumes, and that was only triggered by the word veracrypt, and because of that optimization it no longer started a process every time I booted into the Plasma session. It is also now possible to set actions only for specific matches; before, KRunner would always take all of the actions that you provided for your entire D-Bus runner and add them to every match, and now you can specify that explicitly if you want.
You can also define your runner's user help in a RunnerSyntax-compatible array in the metadata. The metadata are just desktop files, and all of the properties are documented on the develop.kde.org page. Because we broke the KRunner ecosystem in the KDE 4 to 5 transition, we don't want to do that again, and because of that we would like to keep the KF5 runners compatible and working with KF6, but only as long as you don't get a deprecation warning about the desktop file location, because we internally needed to get rid of KServiceTypeTrader. The new location is in the data root dir, under krunner/dbusplugins, so it's really easy to port. The new install location has been available since KDE Frameworks 5.72, and that has even hit Debian stable (Debian 11), so there's really no reason not to port to it.

In Plasma 5.21 an integration of the KRunner plugins into the KDE Store was added, and that supported custom install scripts and some pre-built packages, and of course proper warnings and recommendations were given in the install dialog. For Python scripts it is relatively simple to just manually review the one or two files that are in there. But because we didn't have a standardized install procedure, we had to rely on the install scripts and open a terminal window for them, because some scripts required manual intervention or adjustments, for example if there's a yes/no question inside of the install script. That has worked great, and we got many more plugins on the KDE Store, and it also made them far more discoverable for users. But it is still not great to have a terminal window open when you want to install something; the Linux nerds are probably perfectly fine with that, but the average user might be a bit confused. Most install scripts for D-Bus runners are just based on the official template in the KDE repository, which adjusts and copies a few files, and the idea is that we can specify the files that
need to be copied, or the information for the files to be generated, in the metadata format, and then we can generate or copy those files ourselves without the need to execute any custom script. Maybe that can be extended with some dependency or installation checks, so that your runner works properly after being installed. The merge request for that has recently landed, and it will be made use of in the official D-Bus runner template in the KDE repository.

In KDE Frameworks 6 there are quite a lot of changes; when looking at the Git history, there was lots of code removed and API refactored. There were quite a lot of KDE 4 leftovers, as I mentioned with the run options API that was entirely defunct. Most consumer-facing changes were already prepared with deprecation macros, but not every change could be prepared with them. For example, the namespace was changed from Plasma to KRunner, because KRunner is its own framework now and no longer a part of Plasma, and that change didn't make sense within the KF5 timeframe. We also have a significantly reduced dependency tree: KF5 KRunner depended on Plasma Frameworks in its public API, which was quite ugly, and it now only depends on KActivities internally and KCoreAddons in its public API, and besides that only Qt Core. In KF6 the model from Milou was also imported; that model is responsible for sorting the matches the runners provide, and it also exposes the information about the matches to QML. With that move, all of the core functionality is now in one place in the KRunner framework, and that means we are more flexible when doing code changes, because we don't have to worry about different release cycles, though of course we still need to keep binary compatibility and such. Kicker already uses this, and that allowed us to get rid of the sorting implementation there.

Also, there is a new class in KRunner which is just called KRunnerAction.
It is a very simple data class which contains the id, text, and icon or icon name, because having a QAction was overkill and also caused issues with threading. The porting is straightforward: you can remove the new keyword when instantiating it, because it's now not a pointer but passed by reference.

Also, the refactoring of the threading was a large effort. The previous state was summed up pretty well by Kai, who said "currently KRunner uses mental multi-threading for everything", which I found quite funny. He was referring to the match method being called in different threads, and sometimes it was even called by different threads at the same time, if you have a match method which takes very long. The prepare, teardown, init and reloadConfiguration methods were still run in the main thread, so if you were to do any heavy lifting there, you would still block the main UI; and if you wanted to do data initialization without blocking the main thread, it was a pretty big hassle. In KDE code we have often used the pattern of having a mutex, so that on the first match method call you initialize the data, write it into a variable, unlock the mutex, reuse the data for the rest of the match session, and clean it up afterwards. But in KF6, every runner is moved to its own thread, and the prepare, teardown, init and reloadConfiguration methods are also called in the runner's thread. That allows for safe initialization and cleanup of data and makes the code far simpler; even in the framework it is less code, because the ThreadWeaver jobs could be removed. There's also the possibility for further optimizations: in some runners we query the bookmarks using a normal SQLite connection, and if we type, say, "kde" as the first three letters and then type another letter afterwards, we could maybe reuse the previous results instead of making a new query. That is now possible due to the match method no longer being called from separate threads at the same time.
But I also have some further plans and ideas. Before coming to that, I'd say that KRunner is currently in a pretty good state. We are utilizing D-Bus runners where appropriate, like in Plasma Browser Integration and the KActivities manager daemon, but that currently means we have a bit of code duplication: we have this D-Bus runner setup from KRunner copied in a bunch of different places, and ideally we would have a small C++ library for that, maybe even header-only, so that we can deduplicate that code internally in KDE.

Also, the sorting still needs improvements, because we currently use both the match type and the relevance for sorting, which is a bit confusing. The match type is basically an enum with magic values, and the values don't make very much sense anymore, because they are things like helper match, no match or exact match; we don't have any specific usage of a helper match, we only care about the integer value that is assigned to it. If the match type differs between two matches, the match with the higher match type is preferred, and only if the match type is the same do we consider the relevance of the matches. That is confusing a lot of people, and it also makes it really hard to tweak the order inside of the plugins, because you can sometimes by accident rearrange the categories with the results that other runners produced. Instead, we want to have separate values: one for sorting the categories, so that the applications may be more important than the software center runner, and then we keep the relevance, so that runners can tweak the order of their own matches. That would also allow KRunner to learn smarter what results you prefer, because it can both learn what specific applications you most often launch and what runners you generally use. There is also a long-standing feature request about making the search order configurable in the Plasma Search KCM.
The idea is that you can specify a few favorites, and all of those favorites will be presented in a fixed order when they produce matches in KRunner; the order within the category is unchanged, because anything beyond that would be over-configuration. On the right-hand side you can already see a screenshot of how that could look visually. Usually the QML bits are the hardest for me when implementing such features, but Marco helped me out with some details here, and you can see that you are able to rearrange the favorites using drag and drop.

Finally, I want to end with a big thanks to the community, because all of those awesome features and contributions would not have been possible without other members. I've listed a few notable ones, like David Edmondson, and also Natalie Claritos and Kai Uwe, who was the previous maintainer of KRunner, and also Fujan and Nate Graham, who has been very helpful with reviews and visual feedback. There's also a BoF about KRunner on Tuesday at 9 a.m. in room 2, where we plan to discuss some D-Bus runner related ideas and also some visual improvements; Kai has motivated me to make a BoF about it, and I hope to see you there. Thanks.

Thanks Alex, are there questions?

This is just a general question, but over time we have suggested a few UI changes to KRunner, and some of them were implemented, which is really cool. Along with the work that you're doing with plugins, are you also open to visual changes, or not for now?

Well, the framework itself is not specific to any visuals, but if you are referring to the KRunner executable, I'm open for suggestions, and then it would be really cool if you would join me at the BoF on Tuesday, because there we will also discuss some issues and ideas.

Sounds good, thank you.

Do you agree that Alt+Space is the correct shortcut for opening KRunner, instead of all the other default ones?
I have mapped my Meta key to open KRunner, and I'm so used to that by now that every time I'm on my Windows PC at work, I wonder why the menu opens up in the bottom left corner.

So you need to port KRunner to Windows, so you can use it on your work machine.

Yeah. Well, the framework itself would build on Windows, but most of the useful plugins and the executable are part of Plasma.

Okay, thank you. That concludes the session for today; tomorrow at 10 we will start again with the next talks. And in case you have any questions when trying to implement your own runner, feel free to ask.