So welcome to the afternoon session. I think I don't have to talk much about our presenter; David should be known by almost everyone in the community, if you have not been living under a rock somewhere. Today he will talk about how to run KDE applications, and how to debug them, without installing them completely, so you can have a faster development process.

Right, exactly. This is actually a follow-up from the presentation from last year, where we were thinking about how to make things easier for new developers. And one of the things that came up as a prerequisite for other things, like IDE integration and whatever next step we might want to take, is to make it easier for developers to work on something without messing up their system, without installing it into the system. So let me focus on that. OK, that doesn't work, that's not what I see on this screen. Hold on. Interesting. Right.

So why do we want to do that? make install is fast, right? We have split everything up into small modules, so that we can work on smaller things compared to when it was kdelibs and kdebase, big modules. So nowadays, yes, make install is quite fast. But that only applies if you are installing into a system you have built yourself in the first place, for instance using kdesrc-build or Craft. It doesn't really help people who want to develop based on distro packages, because then either you need to install as root into your system, or you need to set up an overlay where you set ten environment variables to point to a different installation prefix on top of your system packages. That works, but it's more setup than we would want for newcomers. Also, a lot of the scripts out there, or the IDEs, don't have this notion that you need to install before you can run what you have just built. All of the IDEs except KDevelop, which, well, had to because of the KDE community. But other IDEs are built around the principle that you will build and run.
It's not build, install, and run. For mobile, yes, that's true; but if you are doing that on the host, then whether you're using Qt Creator or Visual Studio, they don't have this install step in between. You could set it up, of course, but what we're trying to get to is somewhere where we don't have to set up anything out of the ordinary. I've also seen people who fear messing up their working system. I moved past that fear 20 years ago, but some people still don't want to install into the system. And then, of course, the other thing is: if you're going to replace some of your system components with your own, you might be missing whichever adjustments the distribution has made to those components. It could be distribution-specific patches, it could be different directories, it could be anything that is not vanilla upstream KDE stuff.

So this whole idea of being able to run the tests, and possibly the applications, without installing: it's an effort I started a year ago. And the status is that even the CI does it nowadays for Frameworks, but not yet for the rest of the KDE products. But for Frameworks, it is actually set up. And it's the reason why not everything is as green as it used to be, because, as we are going to see, there are still a few things that need work, but a lot of it works.

So the way I see it, this is my architecture diagram; it couldn't be any simpler. You are working on one module, whatever that is: a library, a framework, an application, a Plasma app, that kind of thing. And then you want everything else to come from the working system, so usually what is in /usr. This includes everything else, right? The MIME types, Qt, your frameworks, Plasma, applications, and all of that. That's one of the scenarios we want to make work, and that's the goal here. One issue that's going to come up a few times is duplication: the files I'm working on in my module up there also exist in the system, and I don't want to pick those up.
I want to pick those up from my build directory. And of course, if it's the exact same copy of the same file, then I don't really care; it works, right? But whenever I'm working on a change, I want to make sure that it's my version that gets picked up, not the one from the system. So we are going to see a few ways to try and make sure that the stuff from the system isn't picked up. But at the same time, any dependency that I use, I want that to come from the system: I want the MIME types from the system, I want Qt from the system, I want the frameworks from the system. So it's a little bit difficult to say "I want parts of this, but not all of it", and that's where it gets a bit tricky.

What I didn't draw here is the situation on the CI. The CI is a lot better for this, because basically it sets up a system over there which contains exactly the dependencies of the thing being built and tested, and nothing else. Which means we can use all of the system there: it has exactly what we need, and it doesn't have any duplication with what we are currently building, so it's the best environment possible. So ideally, when you are not a newcomer to the community, you want to make your application or your module work with this, so you can go to some extra effort. To make sure this works, ideally we would set up something exactly like the CI, but that's more work than you would normally do. Just to put it out there: ideally, that's what we would want. In practice, we don't go that far; we simply make it work for us, then push and see if it works on the CI.

One issue there is: how do I make sure that I don't pick up anything from /usr? One very, very brute-force idea is to take this directory completely out of what I'm testing. Obviously, that's not going to work for very high-level applications that use a lot of stuff. But when I'm working on the frameworks themselves, this can be a valid approach.
If I spend a few days working on KService, it doesn't depend on that much else. I can tell it: OK, forget about /usr, take it completely out of XDG_DATA_DIRS. A small technical note about that: if you don't set XDG_DATA_DIRS at all, then it will point to /usr/share by default. So basically, you want to set it to something so that it doesn't look there. One of the things I've been trying is to set it to "foo", something that does not exist, meaning there is nothing to be found via XDG_DATA_DIRS. If you do that, well, it works, right? Nothing from the system is polluting you, but you're also missing a lot of stuff, and I was seeing failures due to MIME types being missing. So, a quick hack for that: let's make a temporary prefix, copy the MIME types into it, and point there; then at least I have my MIME types. And this idea can be extended to anything else: if your tests fail because you need an icon theme, you could copy that over there too, or anything else you need. Of course, when you get to the point where you have 40 different dependencies and so on, you don't want to do that, it's going to be too much trouble. But for just a few data files, that was a quick hack that allowed me to move further. Right? So this is just an idea for testing locally and trying to reproduce the failures that you might see on the CI. I'm not saying that you should set up the full CI for yourself, that's too much work, but this is the way I found to at least reproduce some of the failures I was seeing.

OK, now let's move into a whole section about what works. How is it possible for me to run a binary from the build directory? And if that binary uses a shared library which also comes from the build directory, I want that one to be picked up, not the shared library from the system, right? You don't need to do anything for that to happen; CMake does it for you.
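The brute-force isolation plus temporary-prefix hack described a moment ago can be sketched as a few shell commands. This is a sketch of the idea, not the exact commands from the talk; the prefix location and the ctest invocation are illustrative assumptions:

```shell
# Point XDG_DATA_DIRS at something that does not exist, so nothing
# from /usr/share can leak into the tests.
export XDG_DATA_DIRS=/doesnotexist

# Tests then fail on missing MIME types, so build a minimal temporary
# prefix that contains only the shared-mime-info data.
PREFIX=$(mktemp -d)
mkdir -p "$PREFIX/share"
if [ -d /usr/share/mime ]; then
    cp -r /usr/share/mime "$PREFIX/share/"
fi
export XDG_DATA_DIRS=$PREFIX/share

# Now run the tests from the build directory as usual, e.g.:
#   ctest --output-on-failure
echo "XDG_DATA_DIRS=$XDG_DATA_DIRS"
```

The same pattern extends to icon themes or any other data files a test needs: copy them into the temporary prefix and they will be found, while everything else from the system stays invisible.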
The way that CMake works, at least the way we set it up in Extra CMake Modules, is that it compiles the path to the build directory into the binaries. And when we say compiling in a path, it's actually called the RUNPATH. If you use the objdump command, you can actually see the RUNPATH that has been built into the executable, and you can see that it points to your build directory. In the case of KIO, it would be kio/bin, where all of the libs actually go. That's one of the changes I made a year ago: making sure that everything goes into this bin directory. Yes, it looks a little bit like Windows; we don't really care. Everything is in there: the executables, the libraries, it's all there. And the RUNPATH actually points to that. And the CMake magic makes it so that when you do make install, it patches the binary to put the correct RUNPATH into it: it then points only to the installation prefix, and the build directory is removed from it. That's very nice magic we get from CMake.

What's interesting is that this works out of the box; you don't need to do anything. But there is one thing you can do to mess it up: if you export LD_LIBRARY_PATH, that takes precedence, and then it doesn't actually work anymore. That might not even be true anymore... that was true with RPATH. Let me get this straight again. RUNPATH... no, that's correct: RUNPATH actually has less priority than LD_LIBRARY_PATH. So if you set that environment variable and make it point to your system lib directory, or wherever you install your KDE stuff, then it won't work anymore. So basically, on my system, I killed LD_LIBRARY_PATH completely. And it's usually not needed, because any well-behaved application will have a RUNPATH pointing to its installation prefix. It is a common practice with CMake; CMake actually has a few settings for the RUNPATH, and we just enable the setting that says: please point to the build dir, and update it at install time.
It is a CMake feature; we just enable it. Isn't it even the default, right? Yeah, I'm surprised, I thought you had to enable it, but that might have changed over the versions or something. Anyway, it's not a KDE-specific hack; it's exactly what we need for this.

OK, so shared libraries are found, no problem. The next problem is: how do you find helper binaries? If you are running one test and it has to start some other binary... What we do is that in Extra CMake Modules, we set things up so that everything ends up in the bin directory: all executables, all libraries, and also all plugins built by this module. We put everything together. But we still have to find it there. So how do we make it find an executable there? You might have to adjust the code this time, to look in the directory of the application first. That's trivial: you can use QCoreApplication::applicationDirPath(), which is the directory the executable is running from, and use that as the first directory you look into. And if you don't find anything there, then you fall back to whatever you would do normally to find your executable. OK, that's not rocket science, but it's still the simple recipe I found to make this work. Yes, at runtime it might do one more lookup than it did before; it's really not that big of a problem.

And I didn't say much about plugins. We want those in the bin directory as well, so that we know where to look them up. And this function from KCoreAddons actually puts the plugins in the bin directory with the same directory structure as you would have in the final system. So if it's a KIO slave, it would go into kf5/kio as a subdirectory of the plugin path; that's what happens in the build dir. And the ECM test macros set up the Qt plugin path so that they are found there. So this also works out of the box with ECM.

Data files are also something we had a problem with.
We install files into the system, and then we need them when running. There is an easy solution for that from Qt: the resource system, where we can simply compile these files into the executable. That's easy enough. We do that, for instance, with XMLGUI files. If you compile them into a Qt resource, which can be part of your target, a library or a binary, then they will be found. I even wrote a Perl script to automate the porting to that. You can do the same with any other data file you might need, except if it needs to be translated; then it becomes a bit of a problem, unless you pull the translations into it regularly. We don't really have something out of the box for this yet. OK, I'll move on.

That was the easy part. Now, an interesting case is Kirigami. This is actually the one slide where I didn't implement the solution; other people did, so thank you to the Kirigami people. The problem they had is that they are writing a bunch of QML files, and then they have unit tests that load these components and check whatever. So if you want that to work without installing, the files still need to have the proper directory structure, like org/kde/kirigami.2, because that's the way it has to look. The first thing we tried was to set this up as-is in the source directory; for some reason that didn't work. So instead, what we do is copy all of that into the build directory with the right structure. I think it actually makes it easier to have some control over what your sources look like, and then you can decide how they should look in the build directory. And then we simply need to point to that directory from the tests.

That's done with a bunch of CMake commands that look very awkward if you've never seen them before, but it's actually quite simple. We say: here is a new target, I'll call it "copy", it could be anything. Target as in a make target: you could say make copy, and it would actually do this.
Then we say: OK, what actually happens when I do make copy? We use one of the CMake -E subcommands, in this case copy_directory (you can do the same with single files), to copy this source directory into the build directory with that layout over there. And then we say: this "copy" target, I don't want the developer to have to type make copy. Instead, I say that one of my existing targets, like the Kirigami plugin or whatever, depends on "copy". So whenever that has to be built, this copying happens. And that makes it work out of the box: we put everything into the build directory with the right layout, and then we provide the qmldir file as part of that, which is the entry point for loading these QML files. So that works quite nicely, and I've been applying the same idea to KService, as we'll see in a minute. Any questions? No? That's good, because I don't know anything about QML anyway.

Right, so one of the problems here is these files that are found in /usr, as I said. If it works for you on your system because it picks up the files from /usr, like in this example, service type definition desktop files, then the test will work for you, because it finds the file. But then you push all of that to the CI, and it fails there, because the CI does not have these things installed and does not find the file. So, as I mentioned, you can try to replicate that by giving a weird value to XDG_DATA_DIRS. And then you see something like "couldn't find service type application". And that's something that kept me stuck for a while: I started this a year ago, and then it kind of stopped. And when writing this presentation, I thought: I know what I can do, I can apply the Kirigami solution to this.
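As a rough CMake sketch of the pattern just described; the target name "copy", the "myplugin" target, and the paths are made up for illustration, and Kirigami's actual CMakeLists.txt differs in the details:

```cmake
# A target you could invoke by hand with "make copy".
add_custom_target(copy)

# What "make copy" does: mirror the QML sources into the build
# directory, with the layout the QML engine expects.
add_custom_command(TARGET copy POST_BUILD
    COMMAND ${CMAKE_COMMAND} -E copy_directory
            ${CMAKE_CURRENT_SOURCE_DIR}/src/controls
            ${CMAKE_BINARY_DIR}/bin/org/kde/kirigami.2
)

# Don't make the developer type "make copy": hook it into an
# existing target, so the copy runs whenever that target is built.
add_dependencies(myplugin copy)
```

The tests then point the QML import path at `${CMAKE_BINARY_DIR}/bin`, where the qmldir file and the copied components now live in the right structure.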
And that's what I did yesterday: doing exactly the same thing in the module that actually defines this service type and needs it at runtime. We can simply copy it into the build dir, under the right directory structure, point XDG_DATA_DIRS to that place, and then KSycoca will find it and be able to use it. So it's exactly the same solution, with the same kind of CMake commands. And I did the same in KService and KRunner. So yes, a bit of duplication; maybe we need an ECM macro for this. And it actually works out of the box.

As an added bonus: in the build directory, this directory would usually be called share, as in share/kservicetypes5, for instance. But if, instead of calling it share, we call it data, then it works on Windows as well. Because QStandardPaths on Windows will look for data files under the data subdirectory next to the executable. So basically, by having bin/data/kservicetypes5, we have something that works on both systems: on Unix systems because we set the environment variable, and on Windows because it's there and it's called data. Windows doesn't use XDG_DATA_DIRS, but it just works out of the box. Clear? On this you can ask me questions, I know a lot about this stuff.

So that was about... yeah, well, there is a lot that could be said about KSycoca and the way it works, but anyway, this is slightly different. Nowadays, for plugins, we tend to not use desktop files as much. They are not yet completely deprecated, but a lot of the work that went into describing plugins with JSON, and building the JSON into the plugin itself like Qt does, means that we don't really need the trader mechanism anymore. But we haven't completely moved away from it; that's why it had to keep working as well.
So, one of the APIs we have, KPluginMetaData, which is all about loading the JSON data to find out more about a plugin: it turns out that this code still needs to know the service types, to get the types of the properties. And that's something that's defined in desktop files. And I found out that this can be made to work simply by adding support for Qt resources there. So instead of a full path to a desktop file, we simply put the service type desktop file into a resource, and we give that to KPluginMetaData; it has its own parser and its own cache for that. I think that work was done by CBes. And that means it works out of the box as well. So that's an easy migration for this kind of problem.

So, I went through all of the modules on the CI and tried to fix as many as possible. And yeah, I'm happy to have plasma-framework people in the room: guys, I would like you to look at your CI, it's quite broken. I don't even know; I might be responsible for some of it, if some of it fails because it can't find installed files, but I don't know. These failures are not anything I can recognize. I did fix a few things where it was looking for a service type desktop file, so now that's found. But I don't know, too many failures in too many unrelated things. So yeah, that's kind of the worst module right now. Shame on you, just wanted to say it.

OK, so now moving on to outside of Frameworks. My work is to look at Frameworks, but I would like other people to look at everything else, to make it possible to work on your module without installing it. So you should test this, and fix it. What this means is: if you build everything yourself, you can do make uninstall, so that you remove your module from the system, and then try to run the tests and see what happens. That's the easy way. If you're using kdesrc-build, for instance, to build everything, just remove your module from the install dir and see what happens. Otherwise, you can use the XDG_DATA_DIRS hack to try that.
And then you run your tests, see what happens, and try to fix it. When we are at the point where, for instance, all of Plasma or all of the KDE applications seem to work with this setup, then we can actually switch the CI to not install before running the tests, like we did for Frameworks. And I would really like to get there, but that requires everyone to help me with it. And even if it works for you locally, you should somehow check that it's not picking up installed files. So if you have doubts, use strace, add debug output, make sure that it doesn't just work by chance. And you'll get more contributors, maybe. OK, any questions?

Yes. What about files you install for other apps, like KCM plugins or Dolphin plugins or whatever? I mean, you still need to set up environment variables for those, right?

Yes, if you're working on a repository that only contains a Dolphin plugin, then what you will have to do, and that's indeed not working out of the box, is to point QT_PLUGIN_PATH to the base directory of your plugin, so that it's actually found there. What's interesting here is that my approach until now was to make sure the unit tests work, because in Frameworks, mostly what I have is unit tests. But when you start moving into more of a development setup where you are working on an application, the application should work. A plugin, that's a good point: people need to set one environment variable, which is already better than ten, to point there, and then start Dolphin again, and it will pick up the plugin. And that's something that, I guess, mostly needs documentation.

Maybe we could make it so CMake generates a shell script that you can source, or something?

Could be, yeah. The problem is, it depends on the type of plugin: if you want your shell script to actually launch Dolphin, then it's going to be different if you're working on a Plasma applet, or a KIO slave, or... so that's possibly a bit too specific for ECM. Are there other questions?
We could put them up on a wiki page, and people download the one script they need.

One thing I'm wondering is that, with this, developers have to take care of two ways their applications are going to run: installed and not installed. Isn't that going to make the system more complicated? What about simplifying the setup of a separate install prefix instead, so you only have to worry about making install work well? Is that really unsolvable?

I think it's a different use case. We could work on making it easier to set up a separate prefix. But that's still something that would get in the way of using an IDE: I press Run, and it's not running my tests. That intermediate step, you would have to configure in the IDE, whereas with this, you don't have to. And as you can see, I've tried to make it so that, in most cases, the frameworks provide the right tools, but if you're working on an application, you shouldn't have to do anything. If you make a simple, or even a complex, application using the frameworks, the libraries will be picked up, the unit tests will work, everything will work out of the box. You don't really need to make an extra effort as a developer. You only do that if you start to think about separate repos with plugins and all those kinds of things. But if you're working on an application, it will work without being installed, which I think is the main purpose.

That's nice. But then, for example, all data files have to be bundled into the binary using Qt resources, which means if you're working on data files, for example because you are an artist or something like that, then you need to relink your application every time to test it, as opposed to a make install which would just copy the modified file.

That's true.
But nowadays, I think, when people write applications, they think "one day I might want to deploy this to Windows and Android and all of that", and then having your data files in a resource makes the deployment easier down the line. So it's a bit of a trade-off, but I think it pays off, because we see fewer and fewer separate files being installed, and the resource system being used more and more. But of course, if you have artists who complain about it, you could make it an option or whatever.

I wanted to remark that, with this feature that checks the current binary's directory for the files to execute: at least once in Fedora, we ran into a problem where, for some binary that is normally in the libexec dir, it would be looking for it in /usr/bin, and then it would be running a different binary; I think it was a KDE 4 or even a KDE 3 one, instead of the one it was supposed to run. So you have to be careful, when you run things from the libexec dir, to exclude /usr/bin when you check the current directory, because there can be old things, or things which just happen to have the same name. And /usr/bin is never the right place to look for something that should be in the libexec dir.

Yeah, that's a good point. It makes me wonder if we could easily detect "I am running from a build dir" as opposed to "from an install dir", but we don't really have a heuristic for this. So what you're suggesting is to kind of hardcode: is this /usr/bin? Then skip the first check. Which doesn't seem really correct, right? So I'm wondering how to do that best. I wouldn't even know how to get started with that... signing Linux binaries, I don't know. I'm kind of hoping we don't get into that situation anymore, because we have moved away from /usr/bin for all the stuff that was version-dependent. But yeah, it might still happen. OK, one last short question.

Thanks, David, for your talk.
A question of mine is: do you see any future opportunity for an application to tell you what it is touching? Which configuration it needs, which data it accesses, so that you can debug much more simply, or even transfer these resources to other computers or users?

Yeah, we don't really have anything for that. There are the very low-level tools, like strace, but then it will also show shared library lookups and system stuff and whatever. So on top of that, it would have to be tracing in Qt, which is actually being worked on. That could actually help.

Because if you want to get a bug fixed, first you have to understand where it looks for resources and data, what went wrong, what was accessed. And until now, at least for me, it's pure guessing to get to that point.

As I said, with strace, you don't really have to guess; you just have to use it wisely and go through the output. I'm hoping this is the kind of thing that trace points in Qt will help us with: you could enable the QFile trace point and get the list of files being accessed through QFile, and that could help a bit by being one level higher than strace. I don't know if that would help you.

Perhaps we can discuss it later, in private. Thanks for this explanation.

OK, thank you, everyone. Thank you, David, for your presentation. Thank you.