to enable exploring virtual disks. Let me start by explaining why we need this. We all know the basics of virtualization: we take physical hosts and deploy our workloads and applications in virtual machines that run on top. And by now we're pretty well aware of the benefits of virtualization. Just to name a few: the additional isolation level that we get from virtual machines provides us with better security; we can emulate operating systems that are different from the base operating system installed on the host; it makes our applications easier to back up and move across physical hosts; and so on. When we think about those benefits, we see that they stem from the additional abstraction level that virtual machines provide on top of the base operating system. This abstraction is generally good, but it comes with a cost, and I would like to focus on a specific one: how can we inject files from the physical host into the virtual machine, and vice versa, how can we extract files from the virtual machine to the physical host?

Let's consider two different cases. One: we have a running virtual machine, and let's say that we want to copy a file from within the guest to the physical host. Let's think about the possible solutions that are available today. Some of them are internet-based: I can send the file from the virtual machine using my email if the file is small enough, and for larger files I can use file-sharing services. The problem with these kinds of solutions is that they are based on uploading via the internet, and therefore they are relatively slow. Alternatively, I can use SCP, or set up an NFS share on the physical host and mount it into the virtual machine. That way all the transmissions are over the LAN, and therefore it is faster; but on the other hand, it is more complex for the end users.
And lastly, I can use a guest-agent-based solution. Some vendors allow sharing a file from the physical host with the virtual machine; others run a service within the virtual machine that exposes its files to the outside world. The problem with this kind of solution is that it is not general enough: it is vendor-specific.

And now the second case: let's say that I have just the virtual disk, the file. What are our options this time? If the disk is not bootable, we can start a virtual machine, attach the disk to it, and then access its content from within the guest. If the disk is bootable, we can do the same, or we can simply boot the VM from the disk. In both cases we need to take additional steps, and then we fall back to the previous problem of the running VM. So both of these solutions are slower and more complicated than the ones I mentioned before. And specifically, booting the VM from the disk means that no other process can write to the disk at the same time, because the VM opens the disk with write permissions.

The alternative solution I would like to talk about is combining, integrating, two projects, muCommander and libguestfs, to produce a file manager with which we can browse and modify virtual disks. Let's talk a bit about those tools. libguestfs is a set of tools for accessing and modifying virtual machine disk images. It supports almost every virtual disk format that is available today, such as VMDK and qcow2, and it can access remote disk images. It operates in a secure way. It can access proprietary systems like Hyper-V and VMware, which is important for the virt-v2v process of migrating VMs from one virtualization environment to another. All those capabilities and more are available via command-line tools such as guestfish, virt-builder, and virt-rescue. But there is no integrated user interface for libguestfs, which brings us to muCommander.
That's what users say about muCommander. I've been involved with muCommander for more than 10 years now, some of them as a contributor and some of them as the maintainer of the project, so I definitely agree with that: seriously, it's a lot easier. muCommander is a file manager with a dual-pane interface, like Total Commander, Norton Commander, Midnight Commander, and so on. But what makes muCommander really special is the fact that it is cross-platform: it is written in Java, and therefore it runs on every operating system that supports Java. That's how muCommander looks; here we see the dual panes and a dialog for connecting to an SFTP server. We'll see more of that in the demo. muCommander is a really feature-packed application that supports various archive formats, various protocols such as SFTP and SMB, and all the common file operations, so it is really easy to use.

Now I want to demonstrate that the integration can actually work and what we can get out of it. I have a virtual machine that is installed with Fedora. I log into that VM and browse to my home folder. Inside my home folder I have a folder called FOSDEM that includes a text file with the following content: "FOSDEM 2019". Now let's say that I want to copy this file from the guest to the host machine, to my laptop. Using muCommander, I can go to the disk image; by default, the disk images reside in /var/lib/libvirt/images when using virt-manager. It takes a bit of time to query the data (I will speak about that a bit later), but eventually we get it. We can enter the etc folder. Let me enter my home folder, and specifically the FOSDEM folder inside, and we see the text file. Let me create a folder inside my tmp folder on the laptop; let's call it demo. And with a single click, I can copy the file from the disk to my machine.
Now let's open the local file, and you see the same content. Now I will close muCommander and shut down the VM. You may wonder how come muCommander managed to access the file while the VM was booted from this disk. The answer is here: you see this part? Here muCommander passes libguestfs parameters that specify that the disk should be opened in read-only mode. So let's change it and start muCommander again; this time we will be able to write to the disk. I'll open it again. Note that in this demo I use a disk in qcow2 format, but every format that is supported by libguestfs should work the same. So I have the content; let's go to the local file that we downloaded before and add some text to it. Now I'll go to my home folder, create another folder called FOSDEM2, and copy the modified file to the disk. Now I'll show you that this file is actually accessible from within the guest with the modified content. I'll go to my home folder, and we see the new folder FOSDEM2 that includes the text file. I'll open it, and we see the text I just added.

Okay, so that's what I wanted to demonstrate. Now that we have seen that this can actually work and what we can get from this integration, I would like to speak about the more interesting stuff related to the design and implementation of this feature. Specifically, I want to talk about three things. First, it was not as trivial as it may sound to model virtual disks in muCommander. Second, there are two projects involved, with different APIs, and there were some conflicts between them that had to be bridged. And lastly, this integration is different from all the extensions that are currently supported in muCommander, which makes it a bit more difficult to ship to users, and we'll see why. So let's start with the modeling part. In muCommander we have three types of files.
Archive files are files that contain file systems with files and folders inside. Protocol files are remote file systems that you need to connect to, and generally authenticate against. And local files are all the other files that reside on the local machine. Obviously the best fit for virtual disks is archive files, right? But in practice, the fact that virtual disks are relatively large files means that we want to query them in a lazy way, and in muCommander only protocol files are queried in a lazy way. So eventually I did map virtual disks to archive files, but the querying was one of the gaps that had to be bridged.

So let's start with it. Besides the need to query large files in a lazy way... actually, let me say it differently. muCommander used to query the entire content of archive files up front. With virtual disks, not only are they too large for that, but the API of libguestfs doesn't support such querying either: libguestfs supports querying only a particular folder within the virtual disk. That was one of the things I needed to overcome. The way I overcame it is kind of a hack, but it was good enough for this demonstration, this POC: I introduced code that walks over the entire structure of the disk, using the visitor pattern specifically. Ideally we need to come up with a better solution, which probably means changing muCommander to query all files in a lazy way, not only protocol files but also archive files.

The second gap I want to talk about is related to the way muCommander reads and writes files. muCommander abstracts all files using streams, and that allows muCommander to do some cool stuff. Let's say that I want to copy a file from an SFTP server to an SMB folder.
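Before continuing with that example, the walk-the-whole-structure hack just described can be sketched roughly as follows. This is only an illustration with hypothetical names: `DirectoryLister` stands in for the per-directory listing that the libguestfs binding offers, and the fake in-memory tree stands in for a mounted disk image.

```java
import java.util.*;

// Sketch: libguestfs can only list one directory at a time, so to present a
// whole disk as an "archive file" we walk the tree ourselves with a visitor.
public class DiskWalker {

    // Minimal stand-in for the per-directory listing libguestfs provides.
    interface DirectoryLister {
        List<String> list(String path);       // names of entries in path
        boolean isDirectory(String path);     // true if path is a folder
    }

    // Visitor that receives every path found during the walk.
    interface Visitor {
        void visit(String path);
    }

    // Recursively walk the structure, calling the visitor for each entry.
    static void walk(DirectoryLister lister, String dir, Visitor visitor) {
        for (String name : lister.list(dir)) {
            String path = dir.endsWith("/") ? dir + name : dir + "/" + name;
            visitor.visit(path);
            if (lister.isDirectory(path)) {
                walk(lister, path, visitor);  // descend into sub-folders
            }
        }
    }

    public static void main(String[] args) {
        // Fake disk content standing in for a mounted guest image.
        Map<String, List<String>> tree = new HashMap<>();
        tree.put("/", List.of("home", "etc"));
        tree.put("/home", List.of("fosdem.txt"));
        tree.put("/etc", List.of());

        DirectoryLister lister = new DirectoryLister() {
            public List<String> list(String path) { return tree.get(path); }
            public boolean isDirectory(String path) { return tree.containsKey(path); }
        };

        List<String> seen = new ArrayList<>();
        walk(lister, "/", seen::add);
        System.out.println(seen);  // prints [/home, /home/fosdem.txt, /etc]
    }
}
```

In the real integration the lister would delegate to the binding's per-directory calls; if muCommander queried archive files lazily, this eager walk would become unnecessary.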
muCommander can hold two streams, to both source and destination, read a packet from the source, and immediately transmit it to the destination. That way, files are copied without being persisted anywhere in the middle. Unfortunately, the API of libguestfs doesn't support streams, specifically the Java API: it supports only local files and standard input/output. So again I implemented a hack, using temporary files. Let me explain with an example. Say I want to copy a file into the virtual machine: I read a packet from the stream connected to the source, write it to a local temporary file, and then ask libguestfs to copy from the temporary file to the destination within the virtual disk. This is not that efficient, right? It adds another step. But it is not that bad, because the temporary files are local files. Ideally, we should change the libguestfs bindings to support streams; that would solve the problem.

And now about shipping. Usually in muCommander we take the Java clients that are provided by other projects and integrate them into muCommander. For libguestfs there is no Java client; there is a Java binding, which also provides the Java artifacts we need at compilation time. This raises two difficulties. One, we obviously don't want to bundle that jar into muCommander, because it should already be available on the host. And second, libguestfs-java might be missing on the host. I plan to overcome this by introducing the functionality as a plugin that will be installed separately on the host and that will declare libguestfs-java, specifically the version the plugin was tested with, as a dependency. That's the plan.

And now let's talk about some open questions. We saw in the demonstration that it takes some time to list the content of the disk. That's because the content is shown from a guest-level view, as if the user had started the guest and listed its content.
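Stepping back to the stream gap for a moment, the two copy paths can be sketched like this. All names here are hypothetical: `copyDirect` mimics muCommander's normal stream-to-stream copy, and `FileConsumer` stands in for a libguestfs upload call that only accepts local files.

```java
import java.io.*;
import java.nio.file.*;

// Sketch of the two copy paths: direct stream piping vs. the temp-file
// workaround forced by a binding that only accepts local files.
public class StreamBridge {

    // Normal muCommander path: stream-to-stream, nothing persisted in between.
    static void copyDirect(InputStream in, OutputStream out) throws IOException {
        in.transferTo(out);
    }

    interface FileConsumer {
        void accept(Path file) throws IOException;
    }

    // Workaround: drain the source stream into a temp file, then hand that
    // local file to the consumer (standing in for a libguestfs upload).
    static void copyViaTempFile(InputStream in, FileConsumer upload) throws IOException {
        Path tmp = Files.createTempFile("mucommander-", ".tmp");
        try {
            Files.copy(in, tmp, StandardCopyOption.REPLACE_EXISTING);
            upload.accept(tmp);           // e.g. copy tmp into the disk image
        } finally {
            Files.deleteIfExists(tmp);    // the staging copy is transient
        }
    }

    public static void main(String[] args) throws IOException {
        byte[] payload = "FOSDEM 2019".getBytes();

        ByteArrayOutputStream direct = new ByteArrayOutputStream();
        copyDirect(new ByteArrayInputStream(payload), direct);

        ByteArrayOutputStream staged = new ByteArrayOutputStream();
        copyViaTempFile(new ByteArrayInputStream(payload),
                file -> staged.write(Files.readAllBytes(file)));

        System.out.println(direct.toString() + " / " + staged.toString());
    }
}
```

If the bindings gained stream support, `copyViaTempFile` would collapse into `copyDirect`, removing the extra local write.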
This requires libguestfs to inspect the operating system within the disk, which is a relatively slow operation. Not only that, it also requires the disk to be installed with an operating system and, in case the virtual machine is set up with multiple disks, requires having all the disks together in order to produce this view. Alternatively, we can provide a file-system view: list all the file systems and list the content of each of them separately. That way we will not suffer from the drawbacks I mentioned before, but it is harder for the end users, because users don't necessarily know on which file system the file they are looking for resides; they know where it resides only from the guest's point of view. So my current plan is to combine those two views: provide both of them to the user and let the user decide which one they want to use.

Secondly, there's the question of whether to interact with libvirt. Not only can libvirt provide us with all the virtual disks the VM is set up with, which is needed for the guest-level view, it can also tell us the status of the VM, whether it is running or not, and from that we can figure out whether we can open the disk for writing or have to open it in read-only mode. But that introduces additional complexity, so we'll need to consider it.

And lastly, about caching. muCommander caches the content of queried files and updates it only when the modification date of the file changes. But in practice I see that with virtual disks, the modification date of the file changes regardless of actual changes in the content of the disk, so we will need to come up with a different solution for caching.

Now, the functionality I presented here is part of a broader view, and I want to say a few words about the vision. I want to reach a state where muCommander is a truly pluggable file manager.
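Going back to the caching question for a moment, one possible direction can be sketched as follows: instead of invalidating on the modification date, which changes even when the listed content does not, key the cache on a digest of the content. All names are hypothetical, and hashing a whole multi-gigabyte image would itself be costly, so this only illustrates the invalidation idea; a real solution might digest per-filesystem metadata instead.

```java
import java.security.MessageDigest;
import java.util.*;
import java.util.function.Supplier;

// Sketch: content-digest cache invalidation, avoiding re-queries when the
// disk file's mtime changed but the relevant content did not.
public class DiskCache {
    private byte[] lastDigest;
    private List<String> cachedListing;
    int queries;                              // counts actual re-queries

    // Returns the listing, re-using the cache while the digest is unchanged.
    List<String> listing(byte[] diskContent, Supplier<List<String>> query)
            throws Exception {
        byte[] digest = MessageDigest.getInstance("SHA-256").digest(diskContent);
        if (!Arrays.equals(digest, lastDigest)) {
            cachedListing = query.get();      // content really changed
            lastDigest = digest;
            queries++;
        }
        return cachedListing;
    }

    public static void main(String[] args) throws Exception {
        DiskCache cache = new DiskCache();
        byte[] disk = "same content".getBytes();
        // A running VM may touch the image's mtime constantly, but if the
        // bytes we care about are identical, the second call hits the cache.
        cache.listing(disk, () -> List.of("/home", "/etc"));
        cache.listing(disk, () -> List.of("/home", "/etc"));
        System.out.println(cache.queries);    // prints 1
    }
}
```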
Let's say that we turn muCommander into a core part, and all the extensions are implemented as plugins: some of them will provide the existing functionality, some will provide the support for virtual disks, and there are many more candidates, such as Dropbox, Google Drive, even oVirt (listing the storage domains inside oVirt), and many others. The fact that muCommander is written in Java allows us to leverage the Java clients that are typically provided by different projects, and the fact that muCommander is cross-platform makes it really attractive to other projects, because they can reach a broader audience, right?

Now a word about the roadmap. This functionality is planned to be shipped in the next version of muCommander, 0.9.4. Past experience shows that I'm able to release a new version of muCommander about once a year. Since the last version I have released three updates, and I expect the next one to be complex and to take time, but hopefully I'll be able to ship it later this year.

To sum up: in this session I presented an integration between two projects that produces a file manager with which we can browse and modify virtual disks. It comes in the form of a plugin for muCommander that uses libguestfs behind the scenes. In a way, this integration can provide the user interface that is missing for libguestfs. As you might have noticed, this development is really in its early phase: I published the POC in November and discussed it on the libguestfs mailing list. If you have ideas or you want to contribute, whether code or documentation, in whatever way you want, I really encourage you to do so. All feedback is welcome. You can join our group on Gitter; that's the Gitter page of muCommander, and those are the websites of the two tools involved. That's all from my side, and now I'll be happy to take questions.

[Audience] Ceph integration?
[Speaker] Ceph? I'll repeat the question. The question was whether we consider integration with Ceph. Yeah, why not?
I mean, once we have a good plugin architecture, there's no reason not to support Ceph and other formats and protocols, yes. Hopefully it would be contributed by the community; that would be ideal. Yes?

[Audience] For LVM-based virtual machines that might have their PVs spread across multiple disk files, is that something that works via the guest view?
[Speaker] Yes, yes. The question was about LVM volumes that are set on top of physical volumes that reside on different disks, in different files. Yeah, that's why we need all the disks to be provided; then libguestfs does the job for us, combines them, and produces the guest-level view. And that's some of the motivation to use libvirt: to be able to provide all the input that is needed. More questions? Yes.

[Audience] There's also the option of using 9pfs with virt-manager.
[Speaker] Sorry, I didn't follow.
[Audience] You can exchange files between the host and the guest with 9pfs.
[Speaker] I'll repeat it for the recording; I didn't get the name though, so can you repeat it?
[Audience] 9pfs.
[Speaker] So the question is about 9pfs, which provides an alternative way to share files between the host and the virtual machine. I don't know that particular one, but you say it is provided by virt-manager. What if we use something different, not virt-manager but oVirt, for example? I consider this solution part of the vendor-specific stuff, right?
[Audience] It's upstream, so it almost comes with KVM out of the box, and it's pretty fast.
[Speaker] Okay, but if you add the support in oVirt, let's say, it will be provided for KVM, but then what about other hypervisors? Okay, so time's up. Thank you all for attending.