So before the coffee break, we have one last presentation, by Kevin Ottens and David Faure. The topic is looking at the application developer story: more elitism, anyone?

Hello everyone. We thought it would be a good idea to tell you a little bit about some ideas we have on how to make it easier for contributors to get started. In other words, this is going to be a talk about vaporware, just ideas, right? It's not like we have some revolutionary software to present, just some brainstorming, and the goal is to get the brainstorming to continue in a BoF session where you can tell us what you think. Actually, we have no idea what we're talking about, we just pretend until you're convinced.

So the plan for this talk is: let's have a look at how people get started hacking on KDE software nowadays, and what the ideal developer story would look like. Then we'll have a look at other ecosystems, see how other people solve that problem, and then what we could do about it.

So one of the solutions, if you are a new developer and you want to start hacking on something from the KDE world, is to use your distribution's packages. You have to figure out what to install, right? The full list of developer packages you need. Then you clone something, compile it, install it, and then you can test your changes. That kind of works for standalone applications, by which I mean applications that do not require a lot of very recent libraries from the rest of the KDE software. And if we have a look at what Qt does, that's exactly what they do: on their wiki page there is a list of developer packages for all sorts of different distributions. Since I work with students as well: the part where they install the right developer packages and then run CMake until it actually passes took them two weeks. Not this year, the year before, right?
Because we don't document that for the various distros, with their various creative names for the packages, they had to figure it all out by themselves. Yep.

So this is the list of problems related to that. It is Linux only. It requires an exact list of packages, and if you think about it, there are many distros and many applications; if every application needs to document the list of packages for every distro, that's a lot of different lists. Also, I was trying that with a friend of mine who wanted to start hacking on KMail, and in his distribution the PIM libraries were too old even for the stable branch of KMail, because all of that moves together. So the whole approach does not work if the application requires other libraries from Git as well. Unless, of course, your distribution is recent enough, and that would be the case with KDE Neon Developer Edition or with the openSUSE efforts called Argon and Krypton, both of which give you packages built from Git. So that's one solution. Obviously the problem is that it requires you to switch to another distro, if that's not what you are using, or to use the Docker files that were presented earlier this afternoon.

On top of that, one of the problems is that you have to install before you can run the application. The problem with that is that you're basically messing up your system with something you just hacked on, and then the day after you have to, I don't know, work on that computer, and it's broken. So we would like to make it possible to actually run applications you just built without having to install them, and that is tricky. I had an intern recently working on some KDE software; what I did for him was to simply automate the install step in the IDE, so when he did a build, it would actually install and run the thing. But that's only possible if you don't install as root.
Otherwise... The ideal solution would be to find ways to run apps without installing them, and that requires looking at a number of different problems. One problem is the data files we look up on the file system: XMLGUI files, icons, all sorts of data files. That is relatively easy to fix using Qt resources (QRC); support for that has already been added to the KXmlGui and icon theme frameworks. So that's already available, and a lot of apps should actually be ported to having the XMLGUI file in a resource, for instance. This also helps with deployment on Windows and other kinds of situations. For plugins, it's a lot trickier, because if you build plugins, you want them to be looked up in the build directory, and I don't know any solution other than setting environment variables for that. And if you build helper binaries, you also need a way to find them, which requires changes in the code, I guess. And so on. It's a problem that's not fully solved, but we have made progress compared to some time ago. Working with students again: if you don't solve that install step, add an extra week to what I was talking about earlier. So you're three weeks in before the guy can make his first patch, right? At that point, he is very motivated to contribute.

If we forget about all of that, another solution, or rather a solution to the problem I mentioned earlier ("I want to contribute to KMail, and it requires a lot of recent libraries"), could be: OK, I'm going to install the developer packages from the distro up to KF5, and then on top of that I am going to compile all of PIM, or all of Workspace, or all of the apps. At that point, you need a tool to automate compiling all of that stuff. One solution is kdesrc-build. An equivalent solution on Windows would be Craft, which is the new name for Emerge, sorry.
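To make the resource approach mentioned above concrete, here is a sketch of what shipping the XMLGUI file inside the binary could look like. The file names and the resource prefix are our illustration, not something from the talk; treat the exact prefix KXmlGui searches as an assumption to verify against the framework's documentation:

```xml
<!-- myapp.qrc: ship the XMLGUI file inside the binary instead of
     looking it up on disk (names are illustrative) -->
<RCC>
  <qresource prefix="/kxmlgui5/myapp">
    <file>myappui.rc</file>
  </qresource>
</RCC>
```

The resource file is then compiled into the application (for example via CMake's AUTORCC), so the app can find its UI description without any make install step having run.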
So that is kind of a solution, but of course it requires learning that strange tool, kdesrc-build, which does not make it easy for new contributors. On top of that, you get to compile a lot of stuff. As I said, it's hard to get started with kdesrc-build. And again, you have the choice between two bad options: either you install as root, or you start installing stuff into a different prefix, and you need to deal with that and make sure it's a layer on top of the rest of your installation, which requires a ton of environment variables.

So another solution, to avoid the whole thing about having different layers and installing as root, is to just compile the full stack as a user into a custom prefix using kdesrc-build. The good thing about that is that you eat your own dog food, right? You discover bugs before anyone else. That's good, and bad if you use that laptop for work, like I do. But it gives you the ability to debug any application at any second. If I find anything that doesn't work the way I want, I can just dive into it, and it's already built on my machine. That makes it really easy compared to "oh wait, I need to install new stuff to maybe look at this bug"... and then you forget it, right? So that's my solution.

I forgot to make you people raise your hands. Who is doing development of one repo based on distro packages? Raise your hand if that's the way you develop. Okay, about 10 people. Who's building a whole set of repos, for PIM or apps or Workspace, using kdesrc-build? That's actually a more common workflow. Well, almost equivalent. And who's building everything using kdesrc-build? Nice, that's actually the majority, or at least more than the other solutions. Cool. So obviously, you know the problems with that: it's an even larger compilation, you get to compile another 400 repos, and you need a ton of environment variables to start all of that.
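For reference, the "ton of environment variables" such a layered custom prefix needs looks roughly like this. This is a sketch; the prefix path and the exact variable list are examples and depend on what you build:

```shell
# Layer a user-owned prefix on top of the system installation
# (illustrative; kdesrc-build setups typically generate something similar)
KDEPREFIX="$HOME/kde/usr"
export PATH="$KDEPREFIX/bin:$PATH"
export LD_LIBRARY_PATH="$KDEPREFIX/lib${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}"
export XDG_DATA_DIRS="$KDEPREFIX/share:${XDG_DATA_DIRS:-/usr/local/share:/usr/share}"
export QT_PLUGIN_PATH="$KDEPREFIX/lib/plugins${QT_PLUGIN_PATH:+:$QT_PLUGIN_PATH}"
```

Every one of these has to be set before anything in the prefix runs correctly, which is exactly the friction being described.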
All of this is fine for some of us, but obviously if you're a new contributor this is quite overwhelming. So ideally, what we would like to make possible is for people to just say: OK, I'm cloning this repo, I want to build it, and it should work, even if it has to install dependencies first, or it should be done in an environment where the dependencies are there. It doesn't matter; it should just work to clone, build, run the tests, run the application. That's the command-line story; if you do it in the IDE, then it would be: git clone, open in the IDE, and click run. "And with no install step?" Yes, as you can see, there is no make install step in there. And this is where Kevin takes over.

So recently I've been playing with Rust, for no particular reason, just out of curiosity, and they have this build tool that they use for everything, which they name Cargo. When you want to develop or contribute to anything inside the Rust ecosystem, it's basically: git clone, you call cargo build, you run cargo test to run the tests, and you run cargo run to run the stuff you just built. That's pretty much what we're after. And they have that consistently across everything in the ecosystem. So that's really something we would like to have; we would be quite happy in that case. The way it works as a developer is that you just provide a Cargo.toml file, in a very simple INI-like format. You specify the name of your package, the version, the authors, that kind of metadata, and then you just specify the dependencies and which versions of those dependencies you want. Here I'm using an exact version with equals in that particular example, but you can say 0.3 or above, right, or whatever. And then, by convention, if the package has an src/main.rs file, it's an application; if it has an src/lib.rs, it's a library, or "crate", which is the name they give to this kind of stuff.
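A minimal Cargo.toml of the kind described here might look like this; the package name, author, and dependency are made up for illustration:

```toml
[package]
name = "myapp"
version = "0.1.0"
authors = ["Jane Developer <jane@example.com>"]

[dependencies]
# "=0.3.14" pins an exact version (the "equals" form mentioned in the talk);
# a plain "0.3" would mean that version or a semver-compatible newer one
rand = "=0.3.14"
```

With an src/main.rs present this builds as an application, and the uniform workflow is simply: git clone, cargo build, cargo test, cargo run.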
And then, straight from Cargo, you can actually publish the result of your work so that it's easy to find for other developers, right? It publishes mainly the metadata: where the repo is, who the author is, the actual version, and so on and so on. And that's done with, surprise, a command, which is cargo publish, right? Once you've done that, any other developer using Cargo can see your crate, can add it as a dependency, and Cargo will automatically do the right thing for them. And the right thing, in the Cargo universe, when you have declared a dependency as we had before, is that it clones from the right place for you, downloads the source code, and builds it, which is not necessarily ideal, right?

So that raises the question: is there something similar for C++? I didn't really play with this one, but I did some research, asking "do we have something?", and it turns out that yes, there's a tool named Conan which is the equivalent of Cargo but for C++. It has many of the same principles; it's slightly more complicated, because C++. And it has one advantage, in my opinion, compared to something like Cargo: Cargo builds really everything, right? The whole dependency tree; it will get everything, build it, and statically link to it. Obviously, in the C++ world that's not necessarily what we want: we want to do some dynamic linking, and we don't necessarily want to build everything under the sun. The guys who develop Conan actually accepted that, and there's a mechanism which allows having pre-built binaries for your different Conan packages: if the dependency you request is in an already-known configuration for the build, then you get that binary instead of building everything from scratch.
I'm not getting into details here, but there's a mechanism to generate builds for different configurations and publish them, increasing the chances that you don't have to rebuild when you need a package. And how does it look? You have to produce a file specific to Conan that looks a bit like an INI file, and you specify what your project requires. Same thing here: you can say, OK, I need to depend on Boost at an exact version, or I need to depend on zlib at or above a particular version. Since this kind of recreates the situation where one person does the code and another does the Conan package, several people could make different packages for the same dependency; that's why you have this "@user/channel" part, which identifies the user making the particular package you're interested in, and the branch of that particular package. And then Conan has generators, which generate files on the fly when it downloads and builds the different dependencies, and it turns out that one of the generators is for CMake. Right? And there's more than that: you could target Xcode, you could target Visual Studio, qmake, or qbs, or SCons, right?
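A conanfile.txt of that shape might look roughly like this; the versions and the "@user/channel" references are illustrative, not package references we have verified:

```
[requires]
Boost/1.60.0@lasote/stable
zlib/1.2.11@conan/stable

[generators]
cmake
```

And the CMake hookup the talk goes on to describe would then be on the order of these lines, a sketch of how Conan's CMake generator of that era was consumed:

```cmake
# Pick up whatever `conan install` generated in the build directory
include(${CMAKE_BINARY_DIR}/conanbuildinfo.cmake)
# Adjust include/link/search paths so find_package() looks in the Conan cache
conan_basic_setup()
```

The contributor workflow then becomes: git clone, create a build directory, run conan install pointing at the source, then cmake, then build and ctest.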
So it already supports all of that. Well, I'm not necessarily thrilled about one of them. Interestingly, and that's not on the slide, they also have generators for IDEs, or at least for generating config files to allow easy code completion, because you have to find where everything is installed to have proper completion. So we could imagine having one for integrating with KDevelop directly, for the IDE side, or something like that.

Then the question is: OK, I'm using CMake, I wrote the file I just described, and how do I make sure that when my project is built it actually picks up those dependencies? Well, you just have to add those four lines to your CMake file, and basically you're done. Once you have that, Conan will do all the work so that the dependencies are in place, and the Conan basic setup will add all the flags necessary to build, so that it's completely transparent: when you do a find_package, it will look in the right place. Right places, actually. So in that case you basically end up doing the git clone, then you create the build directory (or not; I generally prefer to) and get into that build directory, and instead of just running "cmake ." you first have to run "conan install .", which does the first phase of generating whatever CMake expects, that is, the conanbuildinfo.cmake file. Then you run CMake, and it won't find anything from your system; it will look inside your Conan install. Then you can make your build with CMake, then you can run CTest and run your binary. Well, you can run your binary assuming we solve this make install problem, right? But if we do solve the make install problem, we have something much more compelling than before: at least you don't have to find all the right dependencies and so on, because it's documented by configuration files and code inside of
your repository, so that's even tracked in version control. If we don't solve the make install problem, then we basically have to deal with layers, with environments, and you end up with n+2 layers: at run time you have to find whatever you depend on in the system, then whatever plugins and assets are in your Conan dependencies (and you have n of them), and then whatever plugins and assets you have in your current application. That's a problem you don't have if the make install problem is solved. Alright.

I won't get into the details of how you publish libraries that others can depend on; that's slightly more involved than with Cargo. Basically, you write a small recipe, which is on-screen, with a particular API. A few interesting things: it can be tested locally before you publish it, so you can pretend it is already in the Conan metadata database and try to build stuff, and it can also be checked on the CI. If you write this kind of recipe, they have all the tooling in place so that you can push it, the CI picks it up and checks it. And we have strong indications that, in the case of the KDE Frameworks, we could in most cases generate those recipes from the YAML files that we already have; we might want to make them slightly richer to make the dependencies more explicit, but then we could auto-generate them. If you want to know more about Conan, there's documentation for applications at the URL there, and for libraries just below. And we could imagine, if you want something like all of KDE Frameworks, making a meta package for that, a virtual package which would just depend on everything, or the same thing for all apps or all of PIM. Or we could imagine a much smaller kdesrc-build which would just clone the code of what you need and then run Conan on everything.

OK, so the pros and cons of using something like Conan. Well, it means that we
would have clearly, locally defined dependencies, which we never had before, and that could make the work on the CI easier, because right now we have this situation where we build everything together in big packages, in part because we don't quite control the dependencies. And of course it works with transitive dependencies, so I don't need to specify everything I depend on, just what I depend on directly, and the other ones are pulled in. Well, that's what you would expect from a package manager. The cons: if a dependency has plugins, those plugins will end up in some folder controlled by Conan, and that makes them tricky to find. The other con is that it doesn't handle situations where you have conflicts with the currently running runtime of your current workspace, like KWin or Akonadi. When we made the slides, we tried to list a few more cons and then realized those are really the main ones. So obviously, if you're developing something which needs to run inside the session, or one of your dependencies does, then you might have a slight problem there. It's not necessarily the most common case, but it happens.

And if you're in that last case, with a problem around runtimes, that's when you basically end up with container-based solutions only: either the Docker approach, as we've seen, or the Flatpak approach, which are fairly similar in some ways. We picked Flatpak for this particular talk just because there's already integration in KDevelop. Flatpak, for those who don't know, is also used by GNOME Builder, and by KDevelop since recently. With Flatpak, instead of having something fine-grained as we've seen before, we don't need to resolve the dependencies and make them explicit and so on; we just create those big fat containers, one for the platform, one for the SDK, and that's where you get all your dependencies. Then you just depend on that, and you run your application
under development on top of it. You would have ready-to-use containers, and we can imagine, in the case of PIM, which is probably the complexity-wise worst case scenario, having a PIM layer on top of that. There's a small example of the manifest there: it's JSON, and on top of the SDK layer you can specify the flags, which build system you want to use, where to find the sources, the version, and yada yada.

So, using Flatpak for KDE apps, how would it look? If we assume that we had a Flatpak manifest in all of the projects, then you could git clone, then download the Flatpak containers corresponding to whatever you just cloned, and build and install the application into that particular container. The application would already be there, but since we assume we didn't solve the make install scenario, you just make install and overwrite it in your local container, and then you can run the container. That's the kind of situation. The pros: we already have the KDevelop integration, we can have all the dependencies pre-built, and you can get started with one click if you use KDevelop. If not, no luck for you; you will have to learn the command-line tools, or we'd have to make a wrapper at that point. It's independent from the running system: whatever distro you have, that's fine. And it's the solution which actually supports complex runtimes, like the KWin or Akonadi situations. Now, it has problems as well. It's Linux only, which is a bummer, because if we want to see more of our apps on Android, if we want to see more of our apps on Windows, then Flatpak is a no-go. The other disadvantage is that it's less fine-grained than something like Conan, so you can imagine that you will have a very large payload to download for each of the apps we work on. Also, Flatpak has a very primitive dependency system; that's in part why we end up with big fat containers instead of having plenty of containers that can depend on
each other: a container can depend on only one other container, so basically you only get linear dependency chains. And currently it's tied to the IDE, as I pointed out, and of course there's the inherent complexity of the containers themselves, because you're not running directly on your system; if you're not using an IDE hiding that complexity from you, then it's not ideal.

And that leads us to a conclusion. You want to do that one? So if we think back on everything we just said, it seems like the best strategy is to first work a bit more on this make install problem, making sure that we can run apps without installing them. If on top of that we go for Conan, then we have something that takes care of downloading the dependencies, so that they are recent enough and pre-built, which is kind of ideal for getting started quickly. The only problem with that is the interaction with your running workspace, so the solution for people who want to completely separate the two and not mess up the running workspace is a container, like Flatpak, or Docker as the alternative. That's what we think would be one way to go. One reason why we layer it that way is that the make install step is the number one troublemaker in everything we do, right? If we solve that one, then suddenly Conan is viable for 80% of the applications we make, and it doesn't prevent us from getting onto Windows and so on. Then you're left with the remaining 20%, where you have to depend on Akonadi, and Akonadi is not easy to port yet, or that kind of situation; in those cases Flatpak becomes the fallback scenario. But at least we suddenly have a much better and more compelling developer story for 80% of our apps. And it also helps people looking at deploying our apps on Windows, or even macOS, because a macOS bundle is also something that's supposed to be self-contained: you want to find everything in resources rather than somewhere on the file system,
so it all kind of fits together with these kinds of solutions. But plugins are kind of a hard problem in there; more thinking required. To follow up on that, if you want to provide us with input, we have planned a BoF, which is tomorrow at 10:30. And if you want to know more about Flatpak, you can go back in time and watch Aleix's presentation yesterday, or you can use the recording, which might be slightly easier. So that's it from us. Do you have any questions?

"One comment and one question. First the comment: everything Flatpak has another side effect, which is that it would make deployment really easy. A few days ago I had to write an app and package it for a couple of distributions. The packaging took me two days; with Flatpak... well, I ended up using pretty much a similar kind of thing. So that's a really nice side effect, because distributing our apps right now is hard. Now the question: I really enjoyed your talk, and I had a deja vu of Randa 2000-something, the frameworks one. Many of these problems we already came upon back then; we analyzed those problems really well, and nothing happened. Are you guys planning to work on this?"
"Is it on your agenda?" That's what happens... This talk is basically PR where we're just trying to get you convinced to do something. I think it's something where, if we all realize the importance of the make install problem, then any one of us who hits the next instance of it will say: we've discussed it, it is worth solving. What actually has changed since Randa 2011 is that especially the people working on Windows and macOS deployment have already made quite some good progress in that area, and just supporting that effort, and possibly even helping with it, is, I think, a very important way forward. So yes, personally I will be having a look at more precisely what else has to be fixed beyond XMLGUI and so on and so on. And that's also, in part, in my opinion, to point out where our hygiene is not ideal and where we stink, because when you work on something that's based on that, it's not ideal, and if I'm looking at it right now, it doesn't cost me much to sort out that tiny bit right now. So it's also to raise your awareness, because then we can slowly move forward.

"Was there a reason why you went with Flatpak, say, versus AppImage or Snappy, for example? Did you evaluate those?" Well, we can bounce that question back to Aleix, because basically we had to look at Flatpak since KDevelop supports it, and we were looking at the developer story, so we picked the thing that KDevelop supports. So "why Flatpak" is for Aleix. Wait, wait, we need the microphone. "So yes, I've looked into all of them. AppImage wouldn't really be a solution in this case, because AppImage doesn't really have a concept of recipes, and part of what we want here is to be able to build the thing in the first place, like you do with Conan. AppImage could be a solution, but then it needs some kind of integration. Snap and Flatpak are the two formats we looked into, and both of them would work. The reason why I implemented Flatpak but not Snap is that Flatpak
works on my distro, Arch Linux, and Snapcraft, which is the tool to build snaps, doesn't work on Arch Linux, at least I don't know how to make it work. Another possibility we looked into was Docker. Docker can reasonably easily be used both to prebuild the dependencies and to create the app images, but what we want to do here is to compile things locally, so that if you want to modify something you can go through the whole development process; so it probably wouldn't be the right solution. So Flatpak, at least at the moment, is the thing we can consider the most general, at least for Linux, like they said." And from my perspective, picking one is really a matter of what is getting traction for deploying to users, and right now there's a bit of traction for doing that with Flatpak. Since we are a free software community like any other, I'd rather reuse something which is partly done and bend it to my will for another use case than restart from scratch.

"Maybe not so much a question, more a comment. In the Qt Company we have been using Conan a bit; we're using it for dependency management in the CI for WebKit, and we've had interactions with the Conan developers. It's pretty positive, they're nice. It comes with its own set of troubles, like verification and so on, but it's trying to be the npm for C++. So I personally would really love having the Frameworks in there, and Qt will be there eventually, I think. It's kind of orthogonal, so I think it's definitely worth pursuing, also to make the visibility of the Frameworks bigger. I really applaud that idea, because suddenly we would have plenty of companions: there are tons of C and C++ libraries which have been packaged for Conan." One last question. That actually sounds like a reason to publish the Frameworks over there even if we don't use it ourselves, just for the PR part of it and the convenience we'd be getting. "Regarding the plugin problem with running from the build directory, we kind of have that solved in GammaRay:
the trick is that the build directory needs to have the exact same layout as the install directory, which is a few lines of CMake code, and then we have code in the library doing the plugin loading that can locate the absolute path of the library itself; from there you know where your plugins are, based on the relative install paths, so you add that to the search paths." But that's the easy case; that's the case where you are building plugins for your own application, so you control the plugin loading and you can add extra search paths. But what if you're building a Qt plugin or a KCM plugin? Then you need it loaded by code that you don't control, and then, apart from environment variables, I don't know any better solution. "But yes, this solution is valid for those cases." Yes.

Thanks, Kevin and David, and with this we end this session. There will be a short coffee break, and after that we'll start at 18:00... no, 17:55, sorry. Thank you.