I know that's difficult to pronounce. I usually just go with the first name, Slava. I started using Linux when I was still in high school, and since then I've worked in different industries: aviation, a bit of telecommunications, the metal industry. That allowed me to contribute to various open source projects: the Linux kernel, Trusted Firmware, the ARM bootloader, and also the Yocto Project. Recently I contributed the OverlayFS classes to openembedded-core, and I try to maintain them. Before we get into what OverlayFS is and what it helps us do, we should understand the problem we are trying to fix here. Then we'll go into the details of how OverlayFS works and how we can use it in Yocto. So imagine your embedded device. When you design it, you usually decide on some partition layout. It can be a single partition, where the root filesystem takes up the whole space, or you can have a root filesystem partition and then a separate application data partition. That's just an example. In those two cases you would typically have an update mechanism like OSTree or SWUpdate; I have also seen devices that just use a homegrown update mechanism based on SquashFS images. We have also heard a lot of talks this week about the A/B update scheme, where one root filesystem partition is currently active and the other is on standby, possibly again combined with an application data partition. For that configuration we heard talks this week about SWUpdate, RAUC, and Mender; there could be others, and other partition layouts too. Now, your device usually also runs some applications: the purpose of the device is that it runs something and is supposed to do something. And you don't always develop your application from scratch. Sometimes you develop the next generation of a device, so the application is already there.
There are also services your device provides, for example a DNS service. Your application or service always has writable access either to the root filesystem or to some configuration partition. In a lot of cases it simply writes to the root filesystem, and that is a problem with the A/B update scheme: when you update, you replace the whole root filesystem, and all the user data is gone. So what can we do here? The simplest, and actually the best, option is to make your application configurable in where it writes, so that it writes to your data partition, or to a tmpfs if you don't care about persisting the data, instead of the root filesystem. But in a lot of cases it's really not possible to change the application, because it's a huge legacy beast, or it would require a lot of time and effort due to hard-coded paths in the code. So what can we do then? In the simple case, we can create a symlink that points either to a tmpfs or to the application data partition. That works for simple setups; one or two symlinks are manageable. But what if there are a dozen places your application writes to? Then it becomes more difficult. Another approach is a bind mount: you bind mount a directory from the tmpfs or the data partition back over the path in the root filesystem. That also solves the problem. But in some cases, in fact in a lot of cases, the path your application writes to already contains some predefined configuration, and you want your application to keep seeing that configuration. This is where OverlayFS comes in handy. What it does, and I will explain how in a few slides, is keep the data from the root filesystem visible and writable, while the writes are redirected either to the application data partition or to a tmpfs; the application still thinks it writes to the root filesystem. So what is OverlayFS? Here is the excerpt from Wikipedia: it is an implementation of a union mount filesystem. When I first read that, I thought:
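As a sketch of the symlink workaround just described (all paths here are made-up placeholders, and a scratch directory stands in for the real rootfs and data partition so this can run unprivileged):

```shell
#!/bin/sh
# Redirect an application's hard-coded write path onto a "data
# partition" via a symlink. Placeholder paths under a temp directory.
set -e
root=$(mktemp -d)

mkdir -p "$root/data/myapp"            # stands in for the data partition
mkdir -p "$root/var/lib"               # stands in for the rootfs
ln -s "$root/data/myapp" "$root/var/lib/myapp"

# The application writes to its hard-coded path...
echo "state" > "$root/var/lib/myapp/state.txt"

# ...but the file actually lands on the data partition:
ls "$root/data/myapp"                  # prints: state.txt

# The bind-mount variant of the same idea would be (requires root):
#   mount --bind /data/myapp /var/lib/myapp
```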
Okay, but what is a union mount? I looked further: a union mount is a combination of multiple directories that appear as one. So what does that look like? Imagine you have directory one with files one, two, and five, and directory two with files three and five. When you mount an overlay, you look at it from the top: you see all the files as if they were in one directory, while underneath there are two different directories. Before I continue with the example, let me fix the terminology. In OverlayFS, the lower layer, or lower directory tree, is the one that appears below; in this case directory one would be considered the lower layer. It can be read-only; it can be read-write too, but writes will never go there. The upper layer, or upper directory, is the directory that appears on top of it, so that's directory two. It has to be writable, and that's what is interesting in our case, since we want to redirect the writes of our application to another partition. The limitation for the upper directory is that it has to support extended file attributes; NFS, for example, is not supported as an upper layer because it lacks them. Then we have the mounted overlay, the merged layer so to speak, which is this one. Another interesting term is the whiteout, or, for directories, the opaque object; I will show later how they are used. This is a file or directory that is marked on the upper layer as removed. The overlay is transparent: the application can do whatever it wants in it, it can read, write, and also remove, and all those modifications are stored on the upper layer. Now an example. That's our directory one with three files, and directory two. You see there is a file five present in both directories, with different content: that's the file from directory one, and that's the file from directory two. So now the command to mount an overlay looks like this.
You mount the filesystem type overlay, and the source is also the special name overlay; those are two different things: this is the source you mount, and that is the filesystem type. This is where you mount it to, directory three. The whole configuration we just discussed, lower layer and upper layer, goes into the mount options: the lower layer, directory one, goes into lowerdir, and the upper layer, directory two, into upperdir. Then there is a special option called workdir; you have to specify it for the overlay to work, and the limitation is that the work directory has to be on the same filesystem as the upper directory. I just duplicated the mount command to point out upperdir and lowerdir again. And that's how directory three looks after we mount it: it has all the files, and you see that the content of file five is the one from the upper layer. Now the removal case. If we remove a file from the overlay, file one for example, we see that it is still present on the lower layer; the lower layer is not touched. But in directory two a special character device is created, with major and minor numbers zero. This is the whiteout: that is how the removal of a file from the overlay is marked on the upper layer. And you see that the file is no longer displayed in the overlay. Here is a slightly more involved example. Now we unmount the overlay and create a few subdirectories to see how directories are handled. We create subdirectories one and two in the lower layer, and subdirectory two in the upper layer. If we mount the overlay again, we see that all of them are present in it, even though subdirectory two exists in both layers. Right now they are empty, but if they had files, the contents of subdirectory two would be merged as well. Now, this is an important thing to note, and I will mention it again later: if you want to make any modifications to the layers of an overlay, it has to be done offline.
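As a rough sketch, the demo from the slides looks like this as a root shell session (directory and file names follow the slides, output is abbreviated, and this assumes OverlayFS support in the kernel):

```console
# mkdir dir1 dir2 dir3 work
# touch dir1/file1 dir1/file2 dir2/file3
# echo "from lower" > dir1/file5
# echo "from upper" > dir2/file5
# mount -t overlay overlay -o lowerdir=dir1,upperdir=dir2,workdir=work dir3
# ls dir3
file1  file2  file3  file5
# cat dir3/file5            # the copy from the upper layer wins
from upper
# rm dir3/file1             # the lower layer stays untouched...
# ls -l dir2/file1          # ...a whiteout appears on the upper layer
c--------- 1 root root 0, 0 ... dir2/file1
```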
OverlayFS does not support changes to the layers while the overlay is mounted; well, the official documentation says the behavior is undefined if you do that. It will not cause a crash, but if you want to make modifications, you have to unmount the overlay first. Here is another test. We remove subdirectory one from the mounted overlay and then create it again. Please ignore this part for now. If we look at directory two, the upper layer, with the getfattr command, the one with the parameter -m -, it lists all the extended attributes on the filesystem; it lists only the attribute names, not their values. After we remove subdirectory one, you see it got the trusted.overlay.opaque attribute. That means that if we now unmount the overlay and create anything in subdirectory one on the lower layer, it will not be displayed in the overlay, because the directory was already removed from the overlay: whatever is displayed in subdirectory one comes from the upper layer, and the lower layer is hidden. In this next test, done offline again, we create that file, and subdirectory two on the upper layer gets the trusted.overlay.origin attribute. That means the files from the lower layer will not be hidden; you see them here in the mounted overlay. Those attributes do have values; you can see them if you add the -d option to getfattr to dump the values, but they are implementation-specific and not really interesting for now. Now, what do you need to do in the kernel? That part is simple. In your defconfig in Yocto, you just set CONFIG_OVERLAY_FS=y. Or, if you use linux-yocto, you can use KERNEL_FEATURES with features/overlayfs/overlayfs.scc, which does exactly the same thing. There are also some optional OverlayFS features; for the use case I am describing here, they are not really interesting.
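In Yocto terms, enabling the driver while keeping the optional features off looks roughly like this (a sketch; the fragment file name is arbitrary, and the .scc feature path is the one from the linux-yocto kernel cache):

```conf
# Option 1: a kernel config fragment (e.g. overlayfs.cfg listed in SRC_URI):
CONFIG_OVERLAY_FS=y
# CONFIG_OVERLAY_FS_REDIRECT_DIR is not set
# CONFIG_OVERLAY_FS_INDEX is not set
# CONFIG_OVERLAY_FS_METACOPY is not set

# Option 2: for linux-yocto based kernels, in the kernel recipe or bbappend:
KERNEL_FEATURES:append = " features/overlayfs/overlayfs.scc"
```

If the features are compiled in anyway, the mount options redirect_dir=off, index=off, and metacopy=off force them off per mount.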
Even more, they should stay disabled. Remember our use case: we want to update the entire filesystem with an A/B update. Now imagine that our application gets a new config file with the update. That config file arrives on the lower layer when the filesystem is updated, while the upper layer is our data partition. The tricky part is that this scenario is only supported when all those optional features are off, and they should stay off: either you disable them in the kernel configuration, or you provide mount options to explicitly mount the overlay with all of them off. And there was actually a bug in the kernel here: regardless of what the documentation said, it was still not possible to see those new files after updating the system; they were not visible. That was fixed in kernel 5.15; the patches, I think there were three of them, were accepted upstream, so if you use an older kernel you have to backport them. This is the actual excerpt from the documentation; again, it is what I said: if you do online modifications to the layers, the behavior is undefined. It might work, it might not; it will not result in a crash or deadlock, but I would not go into the undefined area here. Right, and how do we use all of this in Yocto? There are several ways, depending on your system configuration. You can use OverlayFS in an initramfs, there is the volatile-binds recipe, and there are the overlayfs and overlayfs-etc classes. For the initramfs, the recipe is located in openembedded-core, under recipes-core/initrdscripts. You have to include the package initramfs-module-overlayroot, and when you boot such a system you have to provide the kernel argument rootrw=, which should point to your data partition. That mounts an overlay over the root for you. It does not require the read-only rootfs image feature, but if you are using an initramfs, most likely your rootfs is read-only anyway. Next, volatile-binds.
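Before moving on to volatile-binds, the initramfs route just described boils down to roughly this (a sketch; the package name is the one in openembedded-core, the partition names are placeholders):

```conf
# In the initramfs image recipe:
PACKAGE_INSTALL:append = " initramfs-module-overlayroot"

# On the kernel command line, point rootrw= at the writable data partition:
#   root=/dev/mmcblk0p2 rootrw=/dev/mmcblk0p4
```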
The recipe has been there all along, and the name is actually a bit misleading because it does not mention overlays at all. To use it, you provide a bbappend in your layer and extend the VOLATILE_BINDS variable with two paths per entry, with a newline separator between entries. The first path is your upper layer; the second is your lower layer and, at the same time, the mount point. So in the picture I showed before, the first path is directory two, the second is directory one, and that second path is at the same time the merged overlay. At that location you still see the content of the unmounted overlay, merged with the changes your application writes, and your application keeps writing to the same path. If you use systemd, volatile-binds is already implicitly included in your image. And since it is a bbappend, it works on a layer basis: if you enable the layer in your configuration, the mount points are created; if you disable it, they are not. But there is a tricky part. As I said, the recipe is called volatile-binds, and originally it created only bind mounts. The way it works now, it first tries to mount an overlay, and if that is not successful, for example because OverlayFS is not enabled in the kernel, it falls back to the old behavior: it copies everything from the original directory to the backing directory, which is then not an upper layer but simply bind mounted back. That is the original behavior, and you can still use it. The downside is that the default configuration works only when your rootfs is read-only: the template for the systemd unit checks that the filesystem is read-only, and if it is not, it just skips the mount. And that is really my point here: you have to know this exists, because the name does not tell you anything, and it is not obvious when you want to achieve something like this. Now, regarding the overlayfs class.
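Such a bbappend could look like this (a sketch; the bbappend file name pattern and the /data and /var/lib/myapp paths are assumptions for illustration):

```conf
# volatile-binds_%.bbappend in your own layer.
# Each entry is "<backing dir> <mount point>"; entries are separated by \n.
VOLATILE_BINDS += "\
    /data/var-lib-myapp /var/lib/myapp\n\
"
```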
I contributed it, I think, around the same time volatile-binds was extended with OverlayFS support. What you need to do is a bit more involved, but it is more generic at the same time. In your application recipe, you inherit the overlayfs class and set OVERLAYFS_WRITABLE_PATHS with a key; this is the list of directories you want to mount as overlays. Then, in your machine configuration, you set OVERLAYFS_MOUNT_POINT with the same key; this is the mount point of your data partition. Why the keys? Because you might have several overlays: one backed by a data partition, another pointing to a tmpfs, for example when you do not care about persisting the data but still want the application to write transparently to the rootfs. Finally, in your distro configuration, you just add the distro feature overlayfs. Behind the scenes the class runs a lot of QA checks: that the mount point is actually available, and that, for example, if you define the writable paths but not the mount point, you get an error. There are a couple of unit tests in openembedded-core to make sure all of this works correctly. It works on a recipe basis: if you do not include the application recipe that uses the class, the overlay mount points are simply not created. It works with systemd only; I did not implement it for other init managers. And it works regardless of whether the lower layer is read-only or read-write. That way you can have several testable configurations and make sure your overlays are always there. The other class is overlayfs-etc. I saw a lot of people ask for this feature, because mounting overlays on a per-application basis is good, but you might have services that want to write to /etc.
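Put together, the class-based setup looks roughly like this (a sketch; the key name data, the paths, and the device are placeholders, and the overlayfs-etc part anticipates the variant described next):

```conf
# Application recipe (my-application.bb):
inherit overlayfs
OVERLAYFS_WRITABLE_PATHS[data] = "/usr/share/my-application"

# Machine configuration:
OVERLAYFS_MOUNT_POINT[data] = "/data"

# Distro configuration:
DISTRO_FEATURES:append = " overlayfs"

# For the overlayfs-etc variant, in the image recipe:
IMAGE_FEATURES += "overlayfs-etc"
# ...and in the machine configuration:
OVERLAYFS_ETC_MOUNT_POINT = "/data"
OVERLAYFS_ETC_DEVICE = "/dev/mmcblk0p4"
```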
Writing to /etc is the tricky part, because the first thing that happens when the system boots is that systemd, or whatever init manager you use, starts and takes ownership of /etc. After that you cannot do anything; you cannot remount /etc. What you need to do in your image recipe is add the image feature overlayfs-etc. Then, in the machine configuration, you specify the mount point and the device. The device is needed because mounting your data partition is one of the first things done after boot. If you also use the overlayfs class, those paths are most likely going to match. What happens behind the scenes is that a templated script runs as the very first thing after boot, instead of your init manager; it mounts the data partition, remounts /etc as an overlay, and then hands control back to your original init manager. This is useful, but you have to keep it in mind when debugging: all writes go to the overlay. With the other overlays, if you want to restore something, you can just unmount them and restore the content; with /etc that is not really possible. I recently contributed a fix so that the original /etc content is also available read-only, in case you want to restore something. Now, regarding debugging techniques and useful utilities. There is the overlayfs-progs repository on GitHub, which provides the fsck.overlay utility; there is also a recipe for it in meta-openembedded. I have not worked with it extensively; I think it checks the validity of the upper layer, that all the attributes are written correctly. There is another repository called overlayfs-tools, which provides more interesting utilities you can use; a recipe is also available in meta-openembedded. Again, a comment from me: the names, tools and progs, look much the same, and the Makefiles are almost identical; they even compile the same way.
I tried to contact the maintainers, but they do not really respond. Maybe it makes sense to combine both projects and have one set of tools that provides everything; if you know how to do that, please feel free to contact me. And there is one more, called xfstests. The name does not suggest it is OverlayFS-related, but it is hosted on the kernel Git repositories. It provides a lot of filesystem tests; it is a big QA test suite, and there is a recipe for it in meta-openembedded. The suite has a lot of OverlayFS-related tests, so to say, which you can use, if you find a bug in the kernel's OverlayFS implementation, to check that everything still matches the expected behavior. If you add a new feature, of course, you would need to extend the tests too, and patches and bug fixes should be sent to the project's mailing list. Those are the articles and the official documentation, and those are the two main maintainers of OverlayFS. I think that was it from me. Thank you very much. If you have any questions, maybe I missed or forgot something.

So the question was: what is the work directory for? I think that is something OverlayFS uses internally. Whenever I checked its contents, it was empty, so it is probably needed at runtime; I do not have insight into what happens behind the scenes. The source code is in the Linux kernel, yes.

The next question was: what is special about /etc? I am pretty sure that once systemd starts, it keeps a lot of /etc in use and you cannot remount it anymore. I did not try, but I think it would fail. And I guess completely replacing it counts as a modification too.

The next question was: how is /etc affected after the update? When you do the A/B update scheme, you update the standby partition, and after the reboot the lower layer has effectively been changed offline, not online.
And whatever files you had on the upper layer take precedence, of course; that will be your modified configuration. If you need to change something and take it from the new update instead, that has to be done manually, handled with some update hooks, for example. No, no: the changes are offline, because you reboot the system into the new standby partition.

The next question was: can you modify the upper directory, online or offline? Offline, again, no problem; online, as I said, the behavior is undefined. You can try.

Then: can you export an NFS directory and use it as an upper layer? I think so, but that is a good question, I have not tried it. That was not my use case, since we are on an embedded device and I did not need NFS there. I will try it then.

One more question, which I am not sure I understood correctly: when you use the extended features of OverlayFS, would that be problematic in this case, and is there any mechanism to make sure the overlay is still mounted properly? I think there was a fix recently for the overlayfs-etc class so that all those features are switched off during mounting; I could contribute the same for the overlayfs class, but for /etc that is already the case, they are switched off already.

One more; is that a question or a comment? A comment, okay. So for me personally, the message is: if you can modify your application, that is the easiest way. If you cannot, try symlinks or bind mounts. Otherwise use OverlayFS, but as you have seen, there are a lot of tricky parts and a lot of details you have to keep in mind. No more questions, then. Thank you very much. Thank you.