Hello and welcome to the next talk. It will be about klik. My name is Kurt Pfeifle. Some of you who know me may still think I'm the main developer or spirit behind klik. That's not true; I was just blogging a lot about it, and I helped Simon a little with the work. Simon Peter is the main developer. So maybe you want to say something? Hello, can you hear me? Yes, great. So, how many here have ever heard of klik before? One. Okay, a bit more than one. Five. Has anyone ever tried it? Two. Okay, great. It's very important to note that what we are showing you right now is not what you see on the actual website at the moment. We're showing you the next version, klik2, which is currently being prepared. There is an SVN repository for it, but it's not what you see on the website, just so there's no confusion about that. So, basically, I started klik in 2004, and everybody laughed at the crazy idea in the beginning. They told us: you don't understand what you're doing. That has changed a bit now, but let's look into it. What's the situation today in packaging? Well, every distribution is basically its own world. You have all the different distributions, as you know them, and each is one system. In fact, applications and the system are melted together; they are all the same, managed centrally by the distro. As you can see, every distribution is its own little world, including both the base system and the applications. What's wrong with that? Well, nothing, as long as you stay in your own world; this model works very well if you use what comes with your distribution. But there are some complications, and I want to mention just a few of them. What if you want to test the latest bleeding-edge version of some software that does not come with your distribution? And, of course, you don't want to risk breaking your system. What do you do? Well, usually you go to some download page, and if you're lucky you will find packages for the system you're running.
But in this case I see three distributions, and mine doesn't happen to be among them. So what do I do now? Two: maybe I have found this application, I have installed it on my system, and now I decide I want to try out a new distribution, a new system. What do I do with the applications? Well, basically I have to download everything again, do it all over again. Having a business background, I think time really is money, and a simple switch of your base system shouldn't require you to re-download all the software, to search for everything again and install it again. A distro switch shouldn't require you to get all the applications again. Imagine a world where you download some music files off the net, MP3 files, and as soon as you download them and start using them, they get melted into the system and become part of it. Clearly no one wants that. A music file just runs on top of the system, and you can move it around as you like; I believe software should be the same. Maybe you want to use OpenOffice, the stable version, but you see this great next version coming and you want to give it a try, without risking that something breaks. You want to run both in parallel. Well, if I want to do that currently, I probably have to ask an expert. What does he tell me? Something like this, which I, being a rather non-technical person, don't necessarily understand. But I sure can type it and look what happens, and usually it looks like this: some obscure errors; it doesn't work for me most of the time. Or maybe you want to use an application on a USB thumb drive. As a matter of fact, I'm running around with lots of these. Every one of them has a different distribution on it, and I want to just put some add-on software on the thumb drive. Currently, not so easy. Well, Windows can do it with some help; it's called portable applications, and it's getting quite an uptake at the moment.
If we look at Mac OS X, it can do it out of the box most of the time. And Linux? Good luck. Or, this is the classic problem: what do I do if I want to move an application to the trash, uninstall it? What do I do? No, I mean completely, including all the zillions of dependencies and libraries and other stuff that got installed together with it. And we'll do questions in just a few minutes. Well, if you did not use what was part of your distribution, if you went outside the package management of your distribution and installed just some random stuff off the net, you're pretty much out of luck at this point. You can never know where the stuff went and how to get rid of it, at least if you're not a very technical person. Okay, so maybe this approach, where the distribution contains both the base system and the applications in one block, is not the greatest model of all time, at least not for all users and all use cases. So, here's a suggestion we would like to make. We really think it should look like this: you have different base systems, and the applications run on top of those. The base system, the distribution, is centrally managed by whatever tools the distribution provides. This is the basic libraries, stuff the end user typically doesn't care about as long as it runs, and it should be managed just like it is now, by the distribution. On top of that comes the end-user software. We think the most important aspect of this is the clear separation between the two. So you have the base system, but on top of that you have the end-user applications, which should be even easier to manage, by the user himself. The users should also be able to manually do something with these apps. This is really the critical part of what klik is in essence: it's all about the very clear separation between the base system, which we don't touch, which comes with your distro, and the applications we put on top of it using klik.
We call this application virtualization. Maybe you're familiar with other virtualization approaches; usually an entire operating system is virtualized. This is not what we are doing. We just virtualize the file system of one single application, and it runs at virtually native speed, so there is no large downside to this. Well, a traditional package, if I install an application, looks somewhat like this: it puts files all over the place. As a matter of fact, package managers were invented because no one could track these anymore. I don't know where the stuff ends up. By the way, something similar is known as DLL hell to Windows users. klik does it very differently. We have the base system, the platform, and we just put in one single file per application. One red dot contains everything this application needs to run. So this is really the most central aspect of this talk and of klik. If you want to remember just one thing, this is the thing to remember: one application equals one file. This is what we are all about. By now you probably ask yourself: what are they doing about the dependencies? Well, the dependencies are either part of the base system, that is, they already come with your distribution, or they must become part of that one file we create. So, I think we have some lag in the animations. Maybe it helps to restart OpenOffice; this is a bug we discovered while working with all these animations. Somehow you need to restart it. Okay, once again: everything that's needed to run the application is either already there, or we put it into that single file. Let's hope that we can recover here. Here we are. So, the dependencies. This scheme has some advantages, and I want to go through them very quickly. First, obviously, one file, one application is a very simple concept. Even new users can understand it immediately. Very simple.
Second, when I have an application in this one-file form, I can easily take it and put it on another distribution, and it still runs. So, no re-downloading of the same stuff over and over again if you're using more than one distro. Third, multiple versions of applications in parallel: it's absolutely no problem to have five different versions of the same program running without requiring the user to do anything special. You can also take the one file, put it on a USB thumb drive, and it just happily runs from there. No need to copy it over and copy it back or anything like that; just run it right from the USB thumb drive. And finally, if you want to get rid of the application, you simply take the one file and put it into the trash, and it's all gone. I hope nobody uses trash. There are some more. You can't mess up the base system, since we don't touch it. The scheme actually saves disk space. This is the question that comes up all the time: isn't this inflating file sizes in an insane way? No, it is not, since we are using a compressed file format, and this often offsets the additional libraries and other stuff we have to include. To give you one concrete example: OpenOffice is about 250-plus MB if you install it regularly, and if you do it with klik, it comes down to 120 or 130 or so. So, no, it's not a waste of disk space. Of course, such a scheme also lowers the cost for independent software vendors, who now have to produce just one package instead of multiple packages for all the distros out there. This is a cool one: you can also run applications in a jail. That means everything the application tries to write into your system gets jailed and redirected, so you can take the settings along with the application. So I can not just put Firefox on my USB drive, I can also take all the bookmarks and settings and everything with me. And finally, this is perhaps the most important one: no root rights required. And there are some more.
You can figure them out if you try klik. So, maybe we do a short demonstration. We type in here "klik ltris", taking ltris as a sample application, just press the run button, and now we clicked a bit too fast. Okay, once again: klik, ltris, press run. I now get some information, including the download size, and if I want a description and other details, I can look at what goes into this package. But for now I think I just leave it there and press OK. Now it's downloading. And boom, the application runs. It's really that simple. Now notice, okay, we are quitting it now, it asks for some feedback. Since we are providing packages for so many different distributions and systems, we are interested in where it runs. So after running an app, you get a feedback dialogue. Notice this new icon on your desktop. It is not just an application launcher icon; in fact it contains everything. It's the single file we talked about, in this case 2.1 megabytes, so not too large. Now notice what did not happen: I did not add any repositories. I did not edit any cryptic config files or such. I did not need root rights. I did not have to enter any password. And I did not change my base system in any way. Implementation. Let's talk about how this stuff works. The single file you just saw is basically a standard ISO file, like you use for your CD images, and it contains, in compressed form, everything the application needs that's not part of the base system. This one file, you don't download it from the web as a complete file. No, instead it's created locally on your machine using what we call a recipe file. This is a simple XML file plus some binary ingredients, which can be Debian files or RPM files or any package binaries that are already out there, already existing. What the klik client then does is take the recipe, take the ingredients, and cook up this red file on your machine. Now, such a recipe, what does it do?
What does it say? Go to the Skype website, look up what the most recent version of Skype is, take it, download it, and make this single file out of it. That, in plain words, is what a recipe says. Not using pre-produced files has a lot of advantages. For example, we can use commercial applications that are not easily redistributable; we can just use whatever binaries are already provided by the company itself. We don't redistribute anything, basically. Also, we can use the existing mirror infrastructure. We don't need special download servers for klik; instead, we use the packages and the mirrors that are already working and already out there. This also allows for instant updates: whenever Skype updates their homepage with a new version, it's there instantly. Also, we believe this scheme increases trust, because you can see exactly where your binaries are coming from and that they come from the original author, a company you maybe trust or an open-source project you trust. We developed this XML format together with Zero Install, and we share the basic structure of this file, so that in the future the two projects can easily cooperate and create either what Zero Install does or what I just explained, the single file. And you can do it too. Here is an example, the XML of a recipe file. As a matter of fact, you just have to fill in some very basic information, such as the name of the application, a little summary, the download URL, and you're basically done. Here we have done it for you. And of course you can add to that recipe, for example a digital signature and some other things. Let's do a quick real-time demonstration. Here I want to package a little game called Xphere. All the information I need is basically the name of the application plus the URL of some binary package that already exists. In this case I want to use a binary package that's already on my disk. And here I have a blank minimal recipe.
Now I will just insert the missing information: the name of the application goes here, we can add a little description, it goes here, and I take this URL. In this case it's just one file, but of course, if it's a more complex application, you could have more than one file. I have now saved this, and I will call klik and give it the path to the recipe XML we have here. It asks me whether I want to download, in this case copy over, and run this recipe. Of course the description is empty, but if we look into the ingredients here, we can see the path I just entered. Now, let's run this, and here we go. On the fly, the klik client converted the deb file I put in into the single file. As you can see, it's really easy to do your own recipe files. Anyone know how to play Xphere? We don't have the time to play the game right now, but it's sending some feedback now, so we know it worked on Ubuntu. If we have a good network connection it might work, but we won't wait for this. Let's go back to the presentation. So, who writes all these thousands and thousands of recipes, if we want to provide a reasonably broad range of open-source software? We don't have the time to do it ourselves, so we use something that already exists, apt-get, but in a different way. We don't use it on your local machine; we use it on the server, and the server calculates all the dependencies and writes these little recipes. Here is an example, kdesvn, a bit more complicated. All the dependency resolving is done server-side, so without even touching the recipe manually, we get a working recipe that's calculated against a common denominator of what we expect to be part of every desktop system out there. So, once again: recipe plus ingredients make the one file, the compressed application image. And this compressed application image actually looks like the standard Unix file system structure. We don't want to reinvent the wheel.
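The server-side step just described amounts to a set difference: take the full dependency closure apt computes, subtract what the common-denominator base system is assumed to provide, and the remainder becomes the ingredients of the recipe. Here is a minimal sketch of that idea in Python; the package names, URLs, and recipe schema are all made up for illustration and do not match klik's real recipe format.

```python
import xml.etree.ElementTree as ET

# Assumed common-denominator base system (roughly LSB-style).
# These package names are illustrative, not klik's real baseline.
BASE_SYSTEM = {"libc6", "libx11-6", "libgtk2.0-0"}

def make_recipe(app_name, summary, dependency_closure, urls):
    """Build a simplified recipe: only dependencies that are NOT
    expected in the base system become ingredients of the one file."""
    extra = sorted(dependency_closure - BASE_SYSTEM)
    root = ET.Element("recipe")  # hypothetical schema, for illustration
    ET.SubElement(root, "name").text = app_name
    ET.SubElement(root, "summary").text = summary
    for pkg in extra:
        ing = ET.SubElement(root, "ingredient")
        ing.set("package", pkg)
        ing.set("url", urls[pkg])
    return ET.tostring(root, encoding="unicode")

# Server-side view: apt's full dependency closure for a small game.
closure = {"libc6", "libgtk2.0-0", "libsdl1.2debian", "xgame"}
urls = {"libsdl1.2debian": "http://mirror.example.org/libsdl1.2debian.deb",
        "xgame": "http://mirror.example.org/xgame.deb"}
recipe_xml = make_recipe("xgame", "A little game", closure, urls)
print(recipe_xml)
```

Only libsdl1.2debian and the game itself end up as ingredients; libc6 and GTK are assumed present on every target system, which is exactly why the resulting file stays small.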
We are using the structure that's already there; we just apply it on a different abstraction layer, a different level: not per system, but per application. Now, if I want to run an application, what happens is: here is my base system, and here comes my application. It's being mounted. The image is mounted transparently over the base system, so that the application sees the base system plus everything contained in this single file. And by the way, we do this per application, so if you have two applications installed by klik, they don't see each other. Each application sees only its own image plus the base system, so they can't interfere with each other. FUSE does this for us. FUSE stands for Filesystem in Userspace, and it allows us to mount these application images without needing root rights. First, we mount this ISO in userland. Then we also mirror the entire base system into the same mount point. Finally, we overlay what's contained in the image, again onto the same mount point, and then we switch there with a fakechroot, so that to the application, the result looks like a system that has not only the base system but also everything that was part of the one-file image. So, how portable is this? We're using FUSE, and FUSE is available for most Unix-like systems. We currently only implemented klik for Linux, but it's well possible that someone comes along and says, hey, I want to have it on one of these, so it should be possible. There are also some user-space programs and kernel modules involved, but since I'm not talking about FUSE, I don't want to go into detail there.
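The overlay behavior described above has a simple lookup rule: a path is served from the application image if it exists there, and falls through to the base system otherwise. The following Python sketch models that rule with two plain directories; it's a model of the idea only, not klik's actual FUSE code.

```python
import os
import tempfile

def resolve(path, image_root, base_root):
    """Overlay lookup: a file inside the mounted application image
    shadows the same path in the base system; anything else falls
    through to the base system."""
    candidate = os.path.join(image_root, path.lstrip("/"))
    if os.path.exists(candidate):
        return candidate
    return os.path.join(base_root, path.lstrip("/"))

# Demo: the image ships its own libfoo; the base system has libbar.
image = tempfile.mkdtemp()
base = tempfile.mkdtemp()
for root in (image, base):
    os.makedirs(os.path.join(root, "usr/lib"))
open(os.path.join(image, "usr/lib/libfoo.so"), "w").close()
open(os.path.join(base, "usr/lib/libfoo.so"), "w").close()   # shadowed
open(os.path.join(base, "usr/lib/libbar.so"), "w").close()

print(resolve("/usr/lib/libfoo.so", image, base))  # the image's copy wins
print(resolve("/usr/lib/libbar.so", image, base))  # falls back to the base
```

Because the lookup is done per application against its own image, two klik apps can ship conflicting versions of the same library without ever seeing each other, which is the isolation property Kurt describes.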
Next, we use a tool called fakechroot. This really is a userland chroot implementation that makes it possible, using LD_PRELOAD, for the application to see the entire system as its root file system when in fact it's not, when in fact it's mounted somewhere under /tmp/something. But we do a switch there, and so the application thinks that's the main root file system. We do not emulate some special files, such as /dev and so on, for security reasons. We can also rewrite paths: reading I already explained, but we can also rewrite paths for writing, so that as soon as an application wants to write somewhere, we redirect the write and can put it somewhere else. And this is actually cool; I want to show a demo. What do we have here? Here is our application; I just started it, and a new file system is mounted, FuseISO. It's now mounted, and as soon as we close the application again, the file system disappears. So, on-the-fly, real-time mounting. Here we go, it's mounted again; we close the game and the file system is gone. Yeah, something is still missing for full application virtualization. We have the base system, the read-only stuff, but now we want to do something about the user data, the read-write stuff, and that's a writeable layer. We put it on top, and this writeable layer takes anything the application wants to write and redirects it to a directory called app-data, in this case. FuseISO does it for us. Again, this is the same procedure as before; I don't have to go through everything again, but the critical part is that we now mount a second layer, read-write, which then redirects all the writes. It's based on FuseISO; we added some functionality to it. I don't want to go too much into detail here. Instead, I think we have another demonstration.
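In code terms, what the writeable layer does to a write amounts to prefixing the target path with the application's private data directory. This is a sketch of the idea only; klik's real layer works at the filesystem level through its extended FuseISO, not per call, and the `.data` directory name here just mirrors the convention shown in the demo.

```python
import os
import tempfile

def jail_write(path, app_data):
    """Redirect a write aimed at an absolute system path into the
    application's private app-data directory, creating parent
    directories as needed. A model of the writeable layer's
    behavior, not klik's actual implementation."""
    target = os.path.join(app_data, path.lstrip("/"))
    os.makedirs(os.path.dirname(target), exist_ok=True)
    return target

app_data = tempfile.mkdtemp()  # stands in for e.g. leafpad.data

# The jailed editor "saves to /" although we are not root:
with open(jail_write("/note.txt", app_data), "w") as f:
    f.write("hello from the jail")

print(os.path.join(app_data, "note.txt"))  # this is where the file landed
```

Inside the jail the application still believes the file sits at /note.txt, which is why reopening it from the same jailed application finds it again, while the real root file system is never touched.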
Now, right, if I launch an application, I can choose whether I want no jail at all, which is what we showed before, whether I want to redirect the writes for the entire system, for the home directory, or just for the configuration files. A little demo here. Here I have an application, Leafpad, and I create a folder that has the exact same name as the application but ends in .data. Now, when I launch this application, I can type something in. Oh, here: the application started in a jail. This just shows me that writes are redirected now. We put some text in, and now try to save it to, let's say, the root directory. And remember, we are not root; we are just a regular user, so normally we couldn't write there. So, it's now saved. Let's close it, and actually the file went to this .data directory. It's not in the root directory, but in our directory here. So we open the application again, go to Open, and sure enough, from the application's point of view, there it is, in the root directory. Here's our file. So, a lot of people ask about desktop menu integration. We didn't do that in klik version 1, but the upcoming klik2 will register downloaded files with your desktop, so that they appear in the desktop menus if you wish. There's a component called klikd, a little daemon with very low overhead that runs in the background and checks whether you added some cmg files, some compressed applications, and if they are there, the menu gets updated. It uses the standard desktop files and integrates the application into the desktop. What about command-line applications? Well, they work just as you expect them to. As a matter of fact, you can take this application image, rename it, and put it somewhere on your PATH, /usr/local/bin for example, and then you can just launch the application as you'd expect, and you don't even notice that, in fact, it's working in a totally different way. Here are some challenges. Problem: which binaries should we take that run everywhere?
This is really a tough question, since if you take a binary that was compiled on a new operating system and you want to run it on an older operating system, it usually doesn't work. So what we would really like to have is backports: recent software compiled on old systems, because those binaries would be ideally portable across all the different distros out there, even the not-so-recent ones. Another problem: if you try out klik, you will notice that most apps work as expected, but some won't. This is simply because most recipes are automatically generated, which usually works quite well, but not in every instance. So if you encounter that, you're welcome to fix the recipe; it's usually just one line you have to adjust, and then it should run. Another problem, which I hope is not a problem anymore: we made quite some progress on this one today, together with the guys from the openSUSE Build Service. We are working on providing nice packages for all the commonly used distributions, so that it's really easy to get the client itself running, which you need not only to get your files but also to run them. Of course, this project is not just ours. We took a lot of inspiration, a lot of ideas and code, from a lot of projects and people, and we want to thank some of them. It starts with Apple, who, at least to my knowledge, first put software in disk images. Knoppix was a very important inspiration: they were the first to make it popular to run an entire system off an ISO. We now take this to the application level. There are some others; probably most of you will recognize them. So it's not really our invention; we just took a lot of good ideas that were already out there. And with this, we would like to invite you to get involved in the project. This is a great team effort. We have various people working on different aspects, but some spots are still open.
So we are looking for someone to create a nice native KDE frontend, for example, or RPM packages. We made some progress on this today, but if someone thinks he can produce great RPMs, please come and see us. Also, of course, we need lots of recipe writers and quality control. So if you want to join the project, you're welcome to do so. With this, we are now at the Q&A session, and I think there was one question up front here. The mic is coming. Hello. Okay. Basically, what you're proposing is a glorified version of static linking. Static linking versus dynamic linking is a debate we've been having for decades. What you've also got is: it's great on your machine there, but if you take, say, an OLPC that's got a little bit of RAM, a little bit of disk space, and not an awful lot else, you can't really afford to have 600 copies of glib and GTK, which you would have if you installed, say, some of the GNOME stuff. You also run into the problem of not having all the memory there, because you're going to end up with 20 copies of GTK loaded. And then you've also got the great problem of what happens when I take it out of my 32-bit laptop and plug it into an ARM machine or a Sun SPARC or something. Okay, so this is three questions, really. The first one was about how it relates to static linking. That's not what we are doing. The software is compiled in the standard way, using shared libraries. All we do is bundle those into one file that is mounted, and as soon as it is mounted, they behave just like regularly installed libraries. So it does not mean that you have to link everything statically. But you then get all the downsides of static linking and dynamic linking. I've got Firefox and Mozilla; both of them seem to depend on libnss, I think it is. So now I'd have two copies? This is true, and it relates somewhat to your second question: which libraries end up in the compressed files? Is it really GTK over and over again? And that's not the case.
We do not include any basic system libraries that are shared by a lot of applications. We only include those libraries that typically just one application, or a few applications, need. So GTK we would expect to be present in the base system. Not every machine has GTK on it. That's correct. Then you cannot run GTK applications, unless you use your distro's package manager to first install GTK and then klik the application. But klik will tell you what to do. So you've got the same dependency hell as before? No. We take it one abstraction level higher. We do not check dependencies on an individual file or package level. Instead we check on a framework level. So we ask ourselves: do we have GNOME? Do we have KDE? Do we have Mono? On this aggregated level. And the last part of your question, portability between platforms: great point. As you saw, we have thought about these issues. We haven't implemented anything like that yet, but in principle you could do two things. You could create specific images for every operating system, but that would not be so nice, of course. You could also put all the different architectures' and operating systems' binaries into the same ISO, and this would then be somewhat similar to a fat binary as known from Apple. We can do that; it's not implemented yet, but we're clearly thinking in this direction. Okay, you can also think about klik as a tool in your hands with which you can build your own portable applications and bundle into them what you want. You remember the one-application-one-file thing; well, what I've done is put five applications into one file as well. klik will have something where you can run it from a prompt, and it asks you which one of the included applications you want to run. For instance, if you want to package KOffice but still have access to the individual applications, or similar things. There was a question. Well, I believe if I have one application like GIMP or something, and I change a photo.
Then it's in the overlay of GIMP, right? So if you have, like, local data, application data, which is in the writeable portion of the image. Say I'm going to use GIMP: then it's saved in the image, so it's not visible from other applications. Is that right? The application that runs from this image will see everything that's already in the base system. So if you have a GIMP version already locally installed and now add another version using klik, it will overlay. If the same version exists in the ISO, it will take that; if it doesn't exist, it takes what is in the base system. So if I want to use one file from two different programs, I should make sure the file is in the base system and not in the application data? That's true; that would be ideal. But usually, as I said, we follow the one-app-one-file approach. So there you just have to make sure, in writing your recipe, that you assume the right things to be on every distribution, and this is a lot of magic, actually. We roughly follow the LSB for that, and we count on frameworks such as Mono, which we simply require to be installed in case you want to run a Mono application. Another question. I once used klik, the first version, and it worked fine for me. So why did you come up with a second version? What was the reason? Sorry, I didn't get the question. What was the reason for a second version? Oh, okay. klik1 was basically a proof of concept. What we did in klik1 was: we did not overlay another layer; we just took the binaries and patched any hard-coded paths they had, so that they could run from any location. This worked most of the time, but not always, and so this is a much, much cleaner implementation. Next question. You talked about optimization because of the compressed ISO, so you don't use that much HDD space. But what about the overhead of decompressing all those images and eventually writing back to them? That's CPU overhead.
So you're interested in the overhead we produce? Yeah, basically, because HDD space is getting cheaper and cheaper nowadays, and sometimes that's the better trade-off, basically. Well, we haven't done really scientific testing on the overhead we are producing, but from day-to-day usage it seems that the overhead is not noticeable in the real world. One more thing: if you ask about the differences between klik1 and klik2, you can read a lot of details in the interview the FOSDEM organizers did with us and published on the website, where we go into the differences between klik1 and klik2. Okay, any more? Something that is really a problem on Mac OS X, for example, and the klik system is really based on the same principles, is package management and software updates. Using a normal central packaging system, you update your whole system; the thing I find really awful on the Mac is that you have to update every package manually. How do you think klik could solve this problem? Because it's also security-related. So, automatic updates work best for stuff in the base system, core libraries and so on. But I'm not so sure whether the user actually wants automatic updates of Firefox or OpenOffice; most of the time the user first wants to check whether the new version really does what was expected, and that's why in klik we want the user in charge of the add-on applications. That being said, it would be possible to create an updater application that checks with the web server, the klik server, which applications have updates, and then automatically downloads them. That would be possible; we haven't implemented it yet, because we think it's so simple, just one file that you can manage manually, that you don't really need a full-fledged package system. But it would be possible. Here you said that you use LD_PRELOAD to create your user-level chroot, but LD_PRELOAD is famous for being easily circumventable, because it's just an environment variable, I think;
this does not protect your user from malicious software. Well, we didn't say it is not circumventable. So yes, if you are using a package, you could package something which does evil to your system, but you can do that with RPM or deb as well. On the other side, you use FUSE in the kernel, which is not really user-level, because you need access to the fuse device, and it changes the real mount table of the machine. Trying to use klik in a multi-user environment, where tens of users install their applications via klik, you end up with a mount table with hundreds of items. I think there are more modern techniques, like system call capturing, like VOS or like User-mode Linux, so you can capture the system calls that access the file system and create a virtual mount; and you can do the same for chroot. Done in such a way, it is not circumventable, because you don't leave the kernel. We've designed it in a modular way, so if you have a concrete idea of how to do that, we should talk, and we would then implement it; we would just exchange this chroot environment for what you propose. Okay, a question over here. So you could probably install and use the whole thing in a virtual environment with NoMachine, right? With NoMachine? He knows. So my good question is: why don't we give every pupil out there his own NoMachine instance and teach them how to use klik? We don't have all the time in the world, so we have to do something, but if you want to tackle that, you are invited to help us. I would definitely try to tackle that. So, are there any teachers in here who would go for an instance for every pupil at school? Any teachers here? No? Wrong conference. I'm sorry, but I tried. Okay, so thank you very much for coming. We are still here if you want to discuss klik some more; we'll be outside. Thank you.