Here I am. Hello everyone. So this is going to be about KIO. I realized a few days ago that some people have no idea what KIO is, which is interesting to me, but actually this talk is more about the old jobs and the new jobs and what is changing with the new jobs, and a bit of movement over there, and you will get an opportunity to give me some design input at the end because there are some open questions. Right, somehow I should be able to get to the next slide. There we are. So the template has an about-me slide, so of course I have to, you know, put a dinosaur picture in there. I've been around for a long time, and KIO actually comes from the original file manager in KDE 1, called KFM. It was extracted from there and then reused. So yeah, very long history in there, and this will explain a few things in this talk. So actually one of the things that is important to start with is asynchronous jobs. The KCoreAddons framework defines a class called KJob which allows you to do work asynchronously. So as the user of the job, you would create the job, connect to the result signal, and then start the job, and the job can do whatever. The whole point of that is to do things that actually use signals and slots and the event loop and take time. It could be timers and sockets and processes and whatever. And at some point, the job is done, your slot is called, and this is where you do the error handling. The one thing I would like to draw your attention to is explicit start. At the KJob level, you have to start the jobs explicitly. Just like you start threads explicitly, you start jobs explicitly. It's kind of consistent there, but we'll see that, for historical reasons, this is not the case in the old KIO jobs. Right, so this is not about threading at all. This is all about stuff that runs in the main thread, let's say. It's more about going back to the event loop, waiting for things to happen, and then signals get emitted. 
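To make that create-connect-start pattern concrete, here is a minimal sketch in plain C++ (the real KJob lives in KCoreAddons and uses Qt signals and slots; the `Job` and `FailingJob` classes here are purely illustrative stand-ins, not the real API):

```cpp
#include <cassert>
#include <functional>
#include <string>

// Hypothetical minimal stand-in for the KJob pattern: explicit start,
// a result callback, and error handling done in that callback.
class Job {
public:
    int error = 0;                        // 0 means success
    std::string errorText;
    std::function<void(Job*)> onResult;   // stands in for the result signal

    virtual ~Job() = default;

    // Explicit start, as at the KJob level: nothing happens until start().
    void start() { doWork(); }

protected:
    virtual void doWork() = 0;

    // Subclasses call this when the asynchronous work is finished.
    void emitResult() { if (onResult) onResult(this); }
};

// A trivial job that "fails", to illustrate error handling in the slot.
// A real job would finish later, from the event loop, not synchronously.
class FailingJob : public Job {
protected:
    void doWork() override {
        error = 1;
        errorText = "something went wrong";
        emitResult();
    }
};
```

The caller then creates the job, connects to the result, and starts it explicitly; error handling lives entirely in the result handler.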
It could be combined with threads, but that's not the point, right? Most of it is used without threads. Right, so this is how we use a job. It's used for many different things. In KCoreAddons itself, we have a list-open-files job. This is the thing that tells you that you cannot unmount this USB drive because you still have these processes running and using it. This is a job around a KProcess, which itself is asynchronous. Very often we have jobs around sockets, direct use of sockets, or QNetworkAccessManager. For instance, you would find that in Purpose, the YouTube job, for instance. Plasma uses jobs for doing blocking operations in a thread. Then the job itself does not block. It tells you when the thread is actually done, which is a nice way to solve the problem of how do you do blocking operations without blocking the GUI. Of course, the one that is closest to this presentation is the jobs that we have in KIO, where we want to list files on an FTP server. How do we do that? We delegate that work to a separate process called an IO slave, and we talk to that process using a local socket. This is, again, a job around socket usage, except that it's not the socket to the FTP server. This is the socket to the IO slave, which itself has a socket to the FTP server, because why make it simple? All right. One of the things that is actually useful there is that a job is very often a composition of other jobs. KCoreAddons also has a class called KCompositeJob, which is a job with child jobs. It doesn't do very much. All it does is keep a list of child jobs, which you can add and remove, and it can propagate errors. 
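The Plasma trick of wrapping a blocking operation in a job can be sketched like this in plain C++ (illustrative names only; in real Qt code the completion notification would be delivered back to the main thread via a queued signal rather than invoked directly from the worker thread):

```cpp
#include <cassert>
#include <functional>
#include <string>
#include <thread>

// Hypothetical sketch: a job that runs a blocking operation on a worker
// thread, so the job object itself never blocks the caller, and reports
// back when the thread is actually done.
class BlockingWorkJob {
public:
    explicit BlockingWorkJob(std::function<std::string()> blockingWork)
        : m_work(std::move(blockingWork)) {}

    std::function<void(const std::string&)> onResult;

    void start() {
        m_thread = std::thread([this] {
            std::string result = m_work();   // the slow, blocking part
            if (onResult)
                onResult(result);            // "thread is done" notification
        });
    }

    // Test helper; a real job would instead emit its result signal.
    void wait() { if (m_thread.joinable()) m_thread.join(); }

private:
    std::function<std::string()> m_work;
    std::thread m_thread;
};
```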
In your own job, when you derive from this and you create subjobs, you decide what to do with them. It's your own choice whether you want to run the child jobs one after the other, what I call serial, or in parallel if you can do multiple child jobs in parallel, and then you're done when all of them are done, or it could even be a mix of both, where you have one job, and then when it's done, you have three jobs in parallel, and so on. This is completely up to you, up to the subclass, which can be a little bit confusing when you use this class, because you might assume it does some things automatically, like when all of the child jobs are done, then surely we are done. Well, not necessarily; it doesn't know, so you have to do that. So the way it works right now is that it takes care of errors, and it will stop the parent job in case of an error, but you have to handle the non-error cases. For some reason, it also doesn't handle suspend and resume and kill. This is the kind of thing that could actually be added there to make it more convenient to write composite jobs. This is basically based on the composite pattern, one of the well-known design patterns, where a composite job is a job itself, which means we can nest them as much as we want. Right. So back to KIO now. KIO uses jobs for everything. It's basically the API we have for delegating work to IO slaves. One of the historical things is the auto-start, unlike KJob. The KIO jobs came first, and then KJob was extracted from that by Kevin, and then he did the right thing: in KJob, it's what you would expect, you have to start the job. But in KIO, for historical reasons, they just auto-start magically, which could be a little bit confusing if you're used to KJob. So there are many jobs in KIO, what we call simple jobs, like mkdir, where all that the job does is create a slave or use an existing slave, send one command to it, wait for the command to be done, and then the job is done. 
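The serial case of that subjob policy can be sketched like this in plain C++ (a hypothetical stand-in for KCompositeJob, not the real class; real child jobs finish asynchronously from the event loop, here they finish immediately to keep the sketch runnable):

```cpp
#include <cassert>
#include <functional>
#include <memory>
#include <vector>

// Illustrative child job: does its work, then reports done to its parent.
struct ChildJob {
    std::function<void()> work;     // what this child does when started
    std::function<void()> onDone;   // set by the parent composite job
    void start() {
        if (work) work();
        if (onDone) onDone();
    }
};

// A "serial" composite: start one child, and only when it reports done
// start the next one; when the list is exhausted, the parent is done.
// Deciding when the parent is done is the subclass's responsibility,
// exactly as with KCompositeJob.
class SerialCompositeJob {
public:
    std::function<void()> onResult;

    void addSubjob(std::shared_ptr<ChildJob> job) { m_children.push_back(job); }

    void start() { startNext(); }

private:
    void startNext() {
        if (m_index == m_children.size()) {
            if (onResult) onResult();   // all children done -> parent done
            return;
        }
        auto child = m_children[m_index++];
        child->onDone = [this] { startNext(); };
        child->start();
    }

    std::vector<std::shared_ptr<ChildJob>> m_children;
    size_t m_index = 0;
};
```

A parallel policy would instead start all children at once and emit the result when a counter of finished children reaches the list size; that choice is entirely up to the subclass.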
It's very simple. Same thing for file deletion. It's really just one command. But there are some more complex jobs. If you think about the KIO copy job, which can copy directories, one file or many files, one directory or many directories, anything you can drag and drop in Dolphin, then it has to do a whole lot of work: listing the directories, figuring out what we have to copy, and checking there is enough space at the destination. And then it starts creating directories. And the same job can also do moving. So all of that has to be handled in there. It's a rather complex state machine, which is all based on subjobs. Same thing for deletion. It can handle directories, so it has to do a lot of work there. But I think the whole design has proven to work rather well over the years. Now, the bête noire in all of this was KRun, because the whole point here is that we are killing KRun. So KRun used to be about many, many things. Figure out what a URL is, right? You click on a link or you type a URL somewhere. We have to figure out what it is. What's the MIME type? Is this a PNG file? Is this a directory or whatever? Once we know, we can open that file in the associated application. It also implements the security check that desktop files have to be executable or they have to come from a well-known directory like /usr/share/applications. So that's part of it. It has to trigger a startup notification. That's the bouncy stuff you get in your taskbar. It has to handle special cases like scripts and executables, when you click on an actual executable. And then KRun had some more bloat in there, because it was subclassed by KHTML and Konqueror and a bunch of other places to be able to say, OK, once we know what this URL is, instead of opening it in an external application, let's just embed a KPart component instead, basically what Konqueror was doing. So this is a useful feature, but it was just too many things for one class. 
On top of that, because it wasn't enough, we also had static methods in there to run any kind of shell command, to launch an application once we know the MIME type, which is really just the second part of KRun. So if we know the MIME type, then we don't actually need to start a KRun job. We could just say, OK, find the app and start it. If we actually know the app, then we also had a method to say, run this application. And then, if there is no associated application for a URL, then we had this Open With dialog, so more methods there. Now, that was a lot of things for one class. And also, this was all very tied to QtWidgets. And some among you who like QML said, this is not so good. So I spent quite some time reworking all of this into separate jobs that also don't have any relationship with QWidget. So let's have a look at how that was done. The very first step was extracting CommandLauncherJob, which is: here is a shell command as a QString, please execute that. Sounds simple enough. This is a KJob, and basically the job finishes once the command has been started, not when the command finishes. Because maybe that command starts Firefox, and the user is going to use Firefox for the next eight hours. The job doesn't run for that long. The job simply makes sure that we managed to find whatever we wanted to start, and that it seems to have started without crashing. And then, OK, my job's done. So all we have to do in there is startup notification and error handling. So this one is simple enough. It's a good thing to have it separate. And that means we can use this asynchronously and without any QWidget. How about when we want to start applications? So, things with desktop files. That's where you use ApplicationLauncherJob. You give it the desktop file to start. Actually, desktop files are modeled in our code by the class KService. So normally what you do is you look up the KService for that desktop file, which uses a cache, ksycoca. 
And then we pass that to ApplicationLauncherJob, which can then start this application. It has the code for the security checks. It's able to pass URLs to the application, obviously. That's one of the important things. And just like KRun used to do, it has this logic where, if the application supports only one file at a time, and it says so with a lowercase %f or %u in the Exec line, then we have to start multiple instances of that application if we want to open multiple URLs. So the code is in there to say, OK, it doesn't seem to support multiple URLs, then let's start multiple instances of it. And then the user gets spammed with 20 windows, but that's what you asked for. But obviously, if the application does support multiple files, then it uses an uppercase %F or %U, and then we know that it supports that, and we just give it all of the URLs. It has support for temporary files. This is a less well-known feature. If you call the setter that says these URLs are actually temporary files, then we can pass that on to the application. And if the application supports it, which some of the KDE apps do, the app itself will get a command-line argument that says --tempfile, and it will delete the files once it's done with them. If the app does not support it, then we try to handle that in the job and do the deletion ourselves after the app is done, which has issues, right? The process could return immediately because it's LibreOffice and it's already running, and then deleting the files is a problem. But that's the idea. And also, if you create an ApplicationLauncherJob without any desktop file or KService, so basically you don't know what to launch, then the Open With dialog will come up, and then we're back to the same logic: we run the application and do all of that once we know what to run, right? OK, so that's ApplicationLauncherJob. I think it's a useful one to keep in mind. That's the one for when you know what application you want to run. 
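That lowercase-versus-uppercase Exec-line rule can be shown as a small sketch (an illustration of the rule from the Desktop Entry spec as described above, not the real KIO implementation; the function names are made up):

```cpp
#include <cassert>
#include <string>

// Lowercase %f/%u in the Exec line means "one file per invocation";
// uppercase %F/%U means one invocation can take all the URLs at once.
bool supportsMultipleUrls(const std::string &execLine) {
    for (size_t i = 0; i + 1 < execLine.size(); ++i) {
        if (execLine[i] == '%') {
            char c = execLine[i + 1];
            if (c == 'F' || c == 'U')
                return true;   // hand over the whole URL list in one go
        }
    }
    return false;              // %f/%u (or nothing): one instance per URL
}

// Decide how many instances to launch for a given list of URLs.
size_t instancesToLaunch(const std::string &execLine, size_t urlCount) {
    if (urlCount == 0)
        return 1;
    return supportsMultipleUrls(execLine) ? 1 : urlCount;
}
```

So an app with `%U` gets one instance with all the URLs, while an app with `%f` opening 20 files gets 20 windows, exactly the spamming case mentioned above.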
What about the general case of: I click on a file in Dolphin, or I type a URL somewhere, or click on a link? Then we are in the more generic problem of how do I open this URL? And for this, I wrote OpenUrlJob, which is basically what the KRun constructor used to do, right? Here is a URL, figure out what to do. It has support for external browsers, just like KRun. I didn't list all of that with KRun, but this is the case where the user says, I hate this Konqueror stuff, or Falkon or whatever, I want Firefox for all my HTTP URLs. And then that's a global setting. And if OpenUrlJob is given an HTTP URL, it won't even try to figure out what kind of file it is. It will just pass it along to Firefox. Done, right? Otherwise, we start using KIO to find out: is this a file or a directory? And then, what is the MIME type? Unless the application gave it to us. You know, if you have the information, you can pass it along to OpenUrlJob, and then we don't have to determine what it is. But if we don't know the MIME type, this is where we figure it out using KIO and stat or get. It also has support for the special cases where you click on a desktop file, you click on a shell script, you run an executable, all of that. In the general case, if it's "open this file with Okular", then we simply call ApplicationLauncherJob to actually start Okular, right? It builds upon the other job. So that makes it a bit nicer, more modular. This job is only about figuring out how to open the URL. Now, this was the job side of things. As a user, we want to get some feedback: progress when we copy a lot of files, support for showing error messages. The way that we solve that problem, the problem of "I'm not going to show message boxes from a core-only library": same thing in KCoreAddons, where KJob is core-only. The only thing it has is an error signal, as we saw. So it also supports delegates, where you can create a delegate and assign it to the job. 
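The decision flow just described can be sketched as a small dispatch function (the enum, struct, and function names are all hypothetical, chosen to mirror the prose, not the real OpenUrlJob internals):

```cpp
#include <cassert>
#include <string>

struct BrowserConfig {
    std::string externalBrowser;   // empty means "no forced external browser"
};

enum class Action { PassToBrowser, DetermineMimeTypeViaKIO, LaunchAssociatedApp };

Action decideOpenUrlAction(const std::string &scheme,
                           const std::string &knownMimeType,
                           const BrowserConfig &config) {
    // Global setting: send http(s) URLs straight to the external browser,
    // without even trying to figure out what kind of file it is.
    if ((scheme == "http" || scheme == "https") && !config.externalBrowser.empty())
        return Action::PassToBrowser;

    // If the caller already told us the MIME type, skip the stat/get step
    // and go straight to launching the associated application.
    if (!knownMimeType.empty())
        return Action::LaunchAssociatedApp;

    // Otherwise, ask KIO (stat or get) to figure out what this URL is.
    return Action::DetermineMimeTypeViaKIO;
}
```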
And the delegate has special methods, like showErrorMessage, which can be re-implemented with widgets or QML, or embedded in your... in Dolphin, it's not going to be a message box, it's more like something that is actually embedded into the UI. And so that you don't have to connect to the result signal and call showErrorMessage yourself, there is this auto error handling where it just happens. And there are subclasses of that: for widgets, the KDialogJobUiDelegate. And for non-widget apps, like Plasma itself, for instance, we use notifications as a way to show errors. So this is how you use them. Trivial: you just create it, set it, and you can enable auto-handling here. It's not auto error handling, because it's errors and warnings. I'll just move on. In addition to that, in KIO specifically, we need extensions. How do you ask the user: this file already exists, do you want to rename it? There was an error with this file, do you want to skip it? Are you really sure you want to delete that? And so on. So that's what the delegate extension is for. And that's for the old jobs, the copy job and things like that. For the new jobs, I had to add some more interfaces: one for the Open With dialog, where we prompt the user for an application to use; another one for the open-or-execute dialog, where it's "this is a script, do you want to read it, or do you want to execute it?"; and then the untrusted program handler, for desktop files that are not executable. So these are three more interfaces in the core library, re-implemented in the widgets library to use widgets. So to tie it all together, what I did was the job UI delegate, which is a widgets-based delegate in the widgets library. It inherits from the UI delegate, and it implements the extension, the old extension for copy job and all of that. So that's what the situation was already, and what I did was these three new interfaces. Ideally it should inherit from them, but you can't do that in KF5. 
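The delegate idea itself, a core-only job talking to an abstract UI interface that front-ends re-implement, can be sketched like this (all class names here are illustrative stand-ins, not the real KF5 classes):

```cpp
#include <cassert>
#include <string>
#include <vector>

// The core library only knows this abstract interface; widget, QML, or
// notification front-ends re-implement showErrorMessage.
class JobUiDelegate {
public:
    virtual ~JobUiDelegate() = default;
    bool autoErrorHandling = false;
    virtual void showErrorMessage(const std::string &text) = 0;
};

// A stand-in for a notification-style delegate: it just records messages,
// which also makes it convenient to verify in a test.
class RecordingDelegate : public JobUiDelegate {
public:
    std::vector<std::string> shown;
    void showErrorMessage(const std::string &text) override { shown.push_back(text); }
};

// A core-only job: no widgets anywhere, it only talks to the interface.
class CoreJob {
public:
    JobUiDelegate *delegate = nullptr;
    void finishWithError(const std::string &text) {
        // With auto error handling enabled, the user does not have to
        // connect to the result signal and call showErrorMessage themselves.
        if (delegate && delegate->autoErrorHandling)
            delegate->showErrorMessage(text);
    }
};
```

The KIO extensions (rename/skip prompts, Open With, open-or-execute, untrusted program) follow the same shape: more virtual methods on interfaces in the core library, implemented in the widgets library.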
So instead of that, there is a bit of a hack in KF5 where, when you create your first job UI delegate, it will register all of these interface implementations in static pointers in the core library, as a way to get to them internally later on. And if you link to KIOWidgets, you get a delegate factory for all of the old jobs, which means, if you don't do anything, that's the whole point: as an application developer, you just create the job, and you set the job UI delegate, and that's it. It will provide the extension and the three interfaces in there. So that makes life easy for the widgets applications, also for historical reasons, so that things keep working. But it's not that easy for QML-based applications, where you need to... you know, how do you provide implementations for all of that? And that's the kind of open question at the end. So the conclusion is: KRun is deprecated, replaced with these three jobs. But when you use them, make sure you pick the right delegates. In widgets applications, you can use the job UI delegate, which does everything, right: errors, and progress, and the Open With dialog, and all of that. If you don't like widgets, then you can use the notification UI delegate for error handling. But right now, there isn't any Open With dialog for QML apps. There isn't any desktop-file security dialog for QML apps, or the open-or-execute dialog. So if you're simply using CommandLauncherJob, that's fine. But if you use ApplicationLauncherJob or OpenUrlJob, then something is missing in there. And I'm calling for people who care about QML to sort of finish that job. The problem is, how do you set all of those interface re-implementations? Do we want four different setters, for every user of the job? You would have to say, oh, if this happens, use that implementation; if this happens, use that. That sounds a little bit verbose. But otherwise, we get into the combination problem of: I want notifications, and I want widgets-based dialogs. 
Or I want notifications, and I want Qt Quick-based dialogs. So there is a bit of design to be finished here. How many separate setters do we want? Or how much of an all-in-one solution do we want? So I'm open for ideas about that. But that's it from me for now. I hope you learned something. Otherwise, I'm sorry. And I will take a look at the questions. Or actually Adam will read me the questions, he said. Yes, correct. Thank you, David. So we have a couple of questions and a few minutes to spare. So: is there an overview at runtime of running KJobs and KIO jobs? The jobs don't really register anywhere in the core library. So you can't have some sort of "give me the list of the running stuff". I suppose the question is about debugging, because if your application relies on this, well, it sounds like something that could be useful for debugging, and also for unit tests possibly, to make sure that all the jobs are actually finished before we move on. We had something like that in Zanshin. It's something that doesn't exist and could be added. For the users, on the other hand, that's where progress notification is important. And that's something you get with the UI delegate. I didn't talk about all of it. There is a mechanism in there for the jobs to actually register with Plasma. And that's how you see, when you copy a file, the progress in the Plasma notifications, because the job actually registered there. There are some D-Bus calls to make that happen. Or of course the application itself can show progress. So there isn't an all-in-one solution for that, because we want to leave it to the applications how they want to present that information. Some jobs are actually internal, and there is a flag, HideProgressInfo, so that you don't bother users with whatever is happening behind the scenes, some auto-save or internal things you have to download. But vice versa, if you do want to show progress to the user, you can actually enable that. OK. 
Should we get rid of the KIO jobs' auto-starting behavior for KF6? I knew this would come up. I present something as inconsistent and historical, and people want to fix it. In my opinion, this is a very dangerous one to tackle, because we have no support from any tool to tell us "this job isn't started". So we'll just forget to add the start call everywhere, and we'll end up with jobs that don't start. On the other hand, what we could do at least is make sure that calling start is OK, that we don't end up with a job starting twice. We'd have to check that, so that we can actually start calling start explicitly everywhere, or at least allow people to do that and be consistent. And then we still auto-start for people who forgot to do it. And yes, we could, of course, have a clazy check for these kinds of things. But I'm not sure that someone is going to run clazy on all 200 repositories and make sure that it's all clean everywhere. That sounds like a lot of work. But if someone wants to do that, what I can do at least is make sure that start is a no-op, or more precisely, that it works in both cases. And then we can allow people to do things cleanly, even if it also works when you don't. It's just like the emit keyword, which does nothing. You can write your code without emit, but everyone uses it. We could do the same here where, yes, you can use an old KIO job without calling start, but everyone does it anyway. Let's go in that direction. OK, thanks. And then someone asks if KF6 is a good place to remove slaves. Is that because of political opinion against slavery? I assume so. We've had that in the past. No, I assume the question is more about the use of external processes versus the use of threads. One of the ideas we had was to at least experiment with running the slave code in a thread. I have this to-do lying around. I didn't get around to it. That's an idea. One of the issues with threads is that it's hard to kill them. They have to agree to be killed. 
In my presentation about QThread, I said, yes, there is a terminate method, but forget it exists. It's very unclean. You can leak memory and leave mutexes locked and all of that. So the good thing about external processes is that they can be killed. And that's convenient when, for instance, a socket is hanging because you switched networks or whatever, and you just never get the socket back. So I think that's one of the benefits of external processes. On the other hand, we could use threads for those local slaves which simply forward to the local file system and are just there to present things differently. Those are safe enough, and there is no socket that can hang, so we could at least start with that.