Hi, my friends. I work at Red Hat on the storage management team, and today I will tell you something about managing your storage with Ansible and Linux System Roles. If you have never heard about Linux System Roles, it's a simple set of Ansible roles that you can use to manage some common components of your Linux systems. The idea is that instead of trying to figure out what command you need to use on RHEL 7, on RHEL 8, RHEL 9, Fedora, or whatever, you write a single playbook that will work everywhere. There are multiple subsystems supported within system roles. Each is a separate role; there are roles for SELinux, networking, firewall configuration, and of course storage. If you are interested in networking, there was a separate talk about the network role yesterday. You can get the roles either as an RPM package, rhel-system-roles on RHEL or linux-system-roles on Fedora, or through Ansible Galaxy. And if you are interested in more details about the roles in general, there's a link to our home page. I will be talking mostly about the storage role today. So the storage role is, obviously, the Linux system role for storage management, and these are some of the technologies the role supports. It supports LVM, which is the default on RHEL and used to be the default on Fedora a few releases ago. That includes LVM RAID, thin provisioning, LVM cache, and deduplication and compression with LVM VDO. We also support plain software RAID with MD, so you can decide whether you prefer LVM RAID or MD RAID. LUKS encryption. And if you don't like LVM, you can just use standard partitions. Of course, we also support managing file systems on those devices. That includes resizing file systems and also fstab management: if you tell us to mount something somewhere, we can manage the fstab entry for it as well. I don't have a demo, because if I had a demo, it would take the entire time just trying to figure out how to make the demo work, because that never works.
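To make the talk's description concrete, invoking the storage role from a playbook looks roughly like this. This is a minimal sketch, not a copy of the speaker's slides; the host group, disk name `vda`, and sizes are illustrative placeholders:

```yaml
# Minimal sketch of calling the storage role.
# Device names and sizes are illustrative, not from the talk's slides.
- hosts: all
  roles:
    - linux-system-roles.storage
  vars:
    storage_pools:
      - name: fedora          # LVM volume group to create/manage
        disks: [vda]          # physical disks backing the VG
        volumes:
          - name: root        # logical volume
            size: "50 GiB"
            fs_type: xfs      # file system to create on the LV
            mount_point: /    # the role also manages the fstab entry
```

The `storage_pools` variable holds the declarative description of the desired state; the role compares it against the current system and only changes what differs.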
But I did all those steps yesterday on my VM, so you need to trust me that it actually works. This is how a playbook with the storage role would look, or at least the part that is concerned with storage. As with everything, or most things, in Ansible, it's declarative: you don't tell us what to do, you tell us what you want to achieve. So you write a description of your storage in the playbook, and then it's up to the role to bring the storage to the point where it looks like what you described. This is an example of how a default Fedora installation with LVM would look with the storage role. That's not a very good use case for the storage role, because you don't do that yourself; the installer does it for you. But if you wrote a playbook like that and ran it on your existing system, nothing would happen, because the system is already in this state, and we will detect that. You ask us to have one volume group called fedora on the disk vda, with two logical volumes, root and home. We will detect that the system is already in this state, and we will do nothing. So after you run the role, you get exactly this storage setup, and nothing should change if you run it over and over again. Now something a little more realistic. Let's say we have the setup from the previous slide, and we bought a new hard drive and want to start using it. We want to add it to our volume group, and we want to resize our home volume to have more space for our data. All you need to do is this: under disks, where previously there was only vda, you add your second disk, vdb, and then under home, where previously there was something like 25 gigabytes, you just put 100 gigabytes, and we will resize it for you. So after you run that, you will be able to see that vdb is now part of the fedora volume group and that the home logical volume is now 100 gigabytes.
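The grow scenario just described can be sketched like this. It's my reconstruction of the slide, not a verbatim copy; device names and the root size are illustrative:

```yaml
# Adding a second disk to the VG and growing the home LV.
storage_pools:
  - name: fedora
    disks: [vda, vdb]       # vdb is new: the role adds it as a PV
    volumes:
      - name: root
        size: "50 GiB"      # illustrative size
        mount_point: /
      - name: home
        size: "100 GiB"     # was ~25 GiB; the role grows the LV and its FS
        mount_point: /home
```

Rerunning the same playbook afterwards should again change nothing, since the system already matches the description.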
By the way, you possibly noticed that I removed the size under the root logical volume. I did that for two reasons. First, it wouldn't fit on the slide, but also, I can do that. If you don't write something in the storage role, then for some things we have defaults. If you decide to create a new logical volume and you don't put in a file system type, for example, we default to ext4 on Fedora and XFS on RHEL. But if it's an existing device and you are just asking us to change something, then the things you don't write are taken from the device. So here, for the root logical volume, I didn't specify the size, and for the role that means: don't touch the size. By the way, for sizes we also support percentage values, so you don't need to be specific about exact gigabytes. You can write: I want 50% for this and 50% for that. So, another use case. Let's say we saved some more money and bought an NVMe drive. This NVMe drive is very small, because we don't have much money, so we can't use it for our root file system. Which is good, because we don't support migrating logical volumes between PVs yet; that's one of our future goals. But you can use it as a cache for your root file system. So we need to add it to the volume group: again, we add the NVMe to the disks, this time as an NVMe namespace. Then for the root logical volume we say we want it cached, with a cache size of five gigabytes, and we tell the role that the NVMe will be used for the cache. And you have an LVM cached logical volume. The output now is quite messy, but you can still see that the NVMe is now part of the fedora volume group, and there are a number of cache pool devices that are quite complicated to understand. But you can see that the cache pool data device is five gigabytes and that it actually sits below the root logical volume, meaning that the root logical volume is now cached. So yeah.
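The caching setup above can be sketched as follows. The option names (`cached`, `cache_size`, `cache_devices`) reflect my understanding of the storage role's interface; the NVMe device name is a placeholder, so check the role documentation before relying on this:

```yaml
# Caching the root LV on a small NVMe drive.
storage_pools:
  - name: fedora
    disks: [vda, vdb, nvme0n1]   # the NVMe namespace joins the VG as a PV
    volumes:
      - name: root
        mount_point: /           # size omitted: the role keeps the current size
        cached: true
        cache_size: "5 GiB"
        cache_devices: [nvme0n1] # PV that should hold the cache pool
      - name: home
        size: "100 GiB"
        mount_point: /home
```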
And another use case: I wanted to show how to actually remove something, because, as I said, not writing something, like not writing a logical volume, doesn't mean it gets removed. Again, for space reasons, I didn't write anything about the home logical volume, but it wasn't removed. So what do you need to do to actually remove something, in this case to remove the cache? You need to write `cached: false`, because if you don't write anything about the cache, we say: okay, you didn't write anything about the cache, so we won't touch the cache. And if you actually ran this without the variable at the top, the storage role would do nothing. That's the so-called safe mode, and it's on by default. By default we won't remove anything, even if you explicitly ask to remove something, or even if you, for example, changed the file system type to XFS here. We wouldn't do that by default, because it would mean you're just losing data, and it's not a very good idea to do something automatically in storage that could mean you lose your data. So if you want to be safe, just don't disable the safe mode. For this case, removing the cache is actually safe, but it's still removing something, so you need to set `storage_safe_mode` to false. And then, if you run the playbook with this, we will actually remove the cache from the root logical volume and also remove the NVMe from the volume group. This is how it would look afterwards; you can see it's basically the same as before: you have your root and home logical volumes and no cache. So this is some of the fancy stuff you can do with the role. Of course, I didn't show the very basic features here, like adding a new logical volume: if you want to add a new logical volume, you just put it here under the volumes, with a name like data, and whatever size, file system type, and mount point you need.
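The removal step, with safe mode disabled, might look roughly like this. Again a sketch based on my reading of the role's variables, with illustrative device names:

```yaml
# Detaching the cache and dropping the NVMe PV from the VG.
# Safe mode must be disabled explicitly, or the role refuses to remove anything.
storage_safe_mode: false         # the "variable at the top" from the talk
storage_pools:
  - name: fedora
    disks: [vda, vdb]            # NVMe no longer listed: it leaves the VG
    volumes:
      - name: root
        mount_point: /
        cached: false            # explicitly remove the cache
      - name: home
        mount_point: /home
```

Leaving `storage_safe_mode` at its default (true) is the safer habit; disable it only for the one run that performs the destructive change.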
And as I mentioned, the mount points are managed in fstab, unless you tell us not to do that. So if you put a mount point there, the role will mount the device for you and put it in fstab, so it's mounted during boot. So yes, these are some features we already have, but we are still missing some; the storage role is relatively new. Right now, in the next RHEL release, we are adding support for online file system resize, which I actually kind of showed here, because I was resizing home while it was mounted; I was already using the development version of the storage role. We also had some customers request options to specify owners and permissions for the mount points, so you can be sure you can write to a newly created mount point if you create it with the storage role. In the future, we would like to focus on more configuration options. As I said, we started with the basic stuff: adding new logical volumes, creating something new. We would like to add more options like the ones I presented, to modify what you already have, maybe like enabling encryption on stuff that is not encrypted. Also, I mentioned the storage technologies we support; we would like to add more. Stratis, Btrfs for Fedora, and also some basic snapshot support is planned for the future. There are also plans to create a more complex snapshot system role, but people expect that the storage role would also support snapshots. So yeah, that was my short presentation. There are some links for you if you are interested in more, and some contacts. I think we have like two or three minutes for questions. [Audience question, paraphrased: can I identify a drive some other way than by device name, for example through a link in /dev/disk?] Yes, you can. The question was whether you need to specify drives by names like vda or sda, or whether you can use the other links there. Yes, you can use the by-uuid and by-id links there.
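The mount-point owner and permission options mentioned above might be used roughly like this. The option names `mount_user`, `mount_group`, and `mount_mode` are my assumption about the role's interface for this feature, so verify them against the role's documentation:

```yaml
# Creating a new LV whose mount point is writable by a specific user.
storage_pools:
  - name: fedora
    disks: [vda]
    volumes:
      - name: data
        size: "10 GiB"
        mount_point: /srv/data
        # Assumed option names for the owner/permission feature:
        mount_user: alice
        mount_group: alice
        mount_mode: "0750"
```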
For RAID, for both MD RAID and LVM RAID, we support all the levels. Raise your hand if you have any more questions. [Audience question.] Yeah, I understand. So the question was more generic, about system roles: whether there is a system role that supports installing packages. I am actually not sure whether there is such a system role, but there is a dnf module in Ansible, so you would probably use that. But yeah, I think you can do that with system roles as well; there's a dnf module in Ansible, so you would probably use that. Any more questions? Okay, so thank you.
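As an illustration of the RAID support mentioned in that answer, an LVM RAID logical volume might be requested like this. This is hypothetical: expressing the level via `raid_level` on a volume is my assumption about the role's interface, and the names and sizes are placeholders:

```yaml
# Requesting a mirrored (RAID1) LV inside an LVM pool.
storage_pools:
  - name: fedora
    disks: [vda, vdb]            # at least two PVs for mirroring
    volumes:
      - name: mirrored
        size: "20 GiB"
        raid_level: raid1        # LVM RAID level for this LV
        mount_point: /srv/mirrored
```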