Hi. So I am Alberto. I work at SUSE. In SUSE, making installers is like a tradition, and this is one more installer, so we are going to talk about Yomi, which is a new kind of installer. So what is Yomi? Yomi is a new kind of installer, for now oriented to the SUSE family, so we have, for example, MicroOS; it is very oriented to the SUSE family. And it is somewhat similar to AutoYaST. AutoYaST is basically the kind of installer where you provide profiles, generally an XML document, and it is able to make local decisions and produce an installation based on this profile document.

One of the goals that we have for Yomi is that it needs to be usable for parallel installations in networks where the nodes have very different hardware configurations: a network where some nodes have a lot of CPU, memory, or hard disk, and we want deployments that differ in the partitioning, the software, and the services that are going to be installed there. Of course, it needs to be unattended: if you have multiple nodes, you don't want to babysit them. That means you need some kind of freedom, and the system needs to make some choices for you when you are not very explicit about what you want.

Also, we want something that is simple to manage. One of the problems with AutoYaST is that XML is not easy to manage; it's not very DevOps oriented. It is hard to provide logic inside an XML document, and if you want to integrate it into something that is Git-based, it's maybe not optimal. We need something that is easy to orchestrate, and this is a real innovation in relation to AutoYaST: orchestration is not part of AutoYaST, and we are talking about installations where orchestration is a key component. For example, some installations need to happen before others, and some services need to be running before other services are able to connect to them. So we need something that can orchestrate.

Something that is not a requirement, but that I put here, is idempotency. We want something that is not going to break our system. We have something that is working and we make a mistake; we are admins, we make mistakes. We don't want the system to be broken because I tried to reapply the installer there. So this is a property that we are looking for. And of course, we need something that can work alone: maybe installation is the only problem you have, you have a big network and you only want to take care of the installation. But we also want something that can be integrated into a bigger solution, something that you can put as one step in a bigger workflow.

A typical use case for Yomi: if you have experience with OpenStack or Kubernetes, you have a good candidate for this installer. In OpenStack, you have a lot of nodes, and those nodes have different hardware, because there are different roles involved in OpenStack. We have, for example, the control plane. The control plane is usually a very big machine with a lot of memory and a very nice network connection, because it's going to be the connection point for the rest of the network. We have some nodes that are going to be compute nodes, so the memory is going to be big, but the hard disk is not going to be super beefy; the CPU is, of course, the feature that defines this kind of node. And we are going to have storage nodes. In a storage node, we don't care much about the memory, maybe.
We don't care much about the CPU, but we really care about the storage that we have installed there. Basically, we want to create LVM, and we are going to have specific hardware for solving this storage problem. So we have different nodes, and of course we want different kinds of installations there: different partitioning, different services, and different users.

So, as I said, we want something that can be integrated into the usual workflow that the company or the client follows during provisioning. Provisioning is something more complicated than installing and setting up services; sometimes it's a very big chain of dependencies, and it's very easy to neglect the installation part of this kind of workflow.

So, let's try to sketch how a normal installer works. If you have experience with how Gentoo works, this is going to be very easy for you. But usually you take YaST, or whatever installer your distribution has, and next, next, next, and you have an installed system. Basically, all installers, whether you drive them manually or automatically, go through the same steps. The first step is partitioning your devices, maybe taking care of RAID or LVM. Eventually you need to provide a file system on the volume or subvolume that you are going to have there; if you have Btrfs, you are probably going to create subvolumes to take advantage of one of the features of Btrfs. Eventually you are going to install the software inside, somehow: maybe you copy the software from a different source, or install it via a repository. Eventually you are going to create users, for example some admin users that you can provide at installation time. Of course, you need to configure certain services, like the time zone that you live in, the network, and the different services that you want to run. One of the last steps is the bootloader: GRUB needs to be placed on the correct device in order for the kernel to be loaded into memory at the proper moment during the boot process. And maybe there are some post-installation tasks: if you have Snapper, you need to take care of some bits that you don't usually see in installers, and if you have Btrfs with read-only volumes, this is the moment when you need to set the flags and do whatever post-installation work you need.

This is extremely easy. I mean, this is very well known, and it's very easy to do via the CLI: you have a device, you boot there, and we are going to see how this can be done. It's also very easy to express in a shell script; you can take the shell script and grow it until you have an installer, something more complicated and more feature-complete. Usually, at one of the major iterations, you are going to build an abstraction on top of that. For example, in YaST we have libstorage, which abstracts completely the problem of partitioning, volumes, and file systems. These are very complicated problems, something that we can abstract, but abstraction usually puts limitations on the things that you can do, and this is not very nice, because when you have a different kind of installation, and that is something we have at SUSE, those abstractions break.

So let's talk a bit about a typical manual installation. In that case, imagine that we have a server with two devices.
We know that we have two devices, sda and sdb. So the first thing that I need is, of course, a USB stick or a DVD from which I boot an operating system, and somehow I have a CLI and I'm able to see my devices. The first thing to do is create the partition label. In this case we are going to create GPT; we don't want MS-DOS or anything else, so GPT it is. The next step is usually to create partitions. We are not going to take care of RAID, because RAID can sometimes live without a partition; sometimes it is the firmware that takes care of the setup of the device, but that is not the case for us now. So we create a first partition that allocates some space for GRUB: GRUB maybe needs some more space, so let's create a partition for it and set the proper flag on this initial partition. We create the swap, we create the root file system, and on the second device a single partition, which we know we are going to use for Btrfs.

The next step is usually the file systems. We have four partitions, but only three need file systems: one for the swap, and for the root file system today we choose ext4, with Btrfs on the other device. Eventually, because we are using Btrfs, I decide that I'm going to create a first subvolume. Maybe later I will create a different one, but I create the first one now: I mount the device, create the subvolume, and, being a good citizen, unmount it again.

The next step is usually creating the fstab. This is something very tricky: the creation of the fstab is not a single step in the installation process, it's something that gets touched multiple times during an installation. So we create the file. Of course, first we need to mount the device, create the /etc directory, create the file, and make sure the proper lines live there, taking care of the UUID for the Btrfs device, because it is very picky. Okay, now we have the initial prototype of the fstab.

Now it's the moment for the software installation. I decided that today I'm going to use zypper as the package manager. Whatever you use, the Debian one, dnf, yum, zypper, there is a way for the package manager to work against a chroot inside the device. In that case, the --root parameter that we can see here is going to take the packages and, after registering a repository, which is one of the first operations we do, take care of the chroot environment and install the software inside the chroot. For today we are going to install a pattern. A pattern, in the SUSE parlance, is a collection of packages; we take the enhanced base system pattern, the kernel, and grub2, so it's going to be very minimal.

We are very close, because now we can install the bootloader. The bootloader is going to be a bit tricky, because now we really need the chroot. The first thing that we do is the proper mounts, to provide something that we can leverage via chroot. Now we can create the initrd. I know that the kernel package from SUSE is going to create the initrd, but I'm not sure in which state it is, so I'm going to create the initrd now. Then I make some small configuration in /etc/default/grub, the file that grub2-mkconfig reads to generate the configuration file. The last step is to make the proper installation; you can see that every command is prefixed with chroot, and this is what makes this step a bit tricky. Then we reboot and have a proper system running. The chroot, in that case, is also used to make sure that the default target is set via systemd, and systemctl reboots the machine.
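Condensed into a shell sketch, the whole walkthrough could look roughly like this. The device names, sizes, repository URL, and pattern name are my assumptions, not the talk's literal commands:

    # Assumed devices: /dev/sda (system), /dev/sdb (data); run as root from the live image.
    parted -s /dev/sda mklabel gpt
    parted -s /dev/sda mkpart grub 1MiB 3MiB         # small area for GRUB
    parted -s /dev/sda set 1 bios_grub on            # the "proper flag"
    parted -s /dev/sda mkpart swap linux-swap 3MiB 2GiB
    parted -s /dev/sda mkpart root ext4 2GiB 100%
    parted -s /dev/sdb mklabel gpt
    parted -s /dev/sdb mkpart data btrfs 1MiB 100%

    mkswap /dev/sda2
    mkfs.ext4 /dev/sda3
    mkfs.btrfs /dev/sdb1

    # First Btrfs subvolume, created and unmounted again like a good citizen
    mount /dev/sdb1 /mnt
    btrfs subvolume create /mnt/data
    umount /mnt

    # Initial prototype of the fstab inside the new root
    mount /dev/sda3 /mnt
    mkdir -p /mnt/etc
    echo "/dev/sda3  /     ext4  defaults 0 1"  > /mnt/etc/fstab
    echo "/dev/sda2  swap  swap  defaults 0 0" >> /mnt/etc/fstab
    # Btrfs is picky: reference it by UUID
    echo "UUID=$(blkid -s UUID -o value /dev/sdb1)  /data  btrfs  subvol=data 0 0" >> /mnt/etc/fstab

    # Software installation into the chroot via zypper --root
    zypper --root /mnt ar http://download.opensuse.org/tumbleweed/repo/oss/ repo-oss
    zypper --root /mnt --gpg-auto-import-keys refresh
    zypper --root /mnt in -y -t pattern enhanced_base
    zypper --root /mnt in -y kernel-default grub2

    # Bootloader: bind mounts, initrd, GRUB, all prefixed with chroot
    for fs in proc sys dev; do mount --bind /$fs /mnt/$fs; done
    chroot /mnt mkinitrd
    chroot /mnt grub2-mkconfig -o /boot/grub2/grub.cfg
    chroot /mnt grub2-install /dev/sda
    chroot /mnt systemctl set-default multi-user.target
    systemctl reboot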
The funny thing is that if you take those steps and copy and paste them, you are going to end up with a bootable system, so it's not very complicated. But you can see that there are a lot of commands there that you need to take care of when your system changes; you are going to have something that is a bit different every time. The thing that we miss, of course, is a way to parameterize anything. We know that we have two devices, so okay, I act according to what I have. We know the kind of partitions that we want, but we don't take care of the sizes, we don't take care of the ordering. So we need a way to specify, if we have different kinds of installations, the partitions and the kinds of file systems. If RAID is involved, the situation gets more complicated, but you need a way to parameterize, to indicate what this profile is going to be. If we are going to use Btrfs, you need to indicate more subvolumes and maybe a prefix for those subvolumes. If you are going to use Snapper, you also need to indicate the default subvolume that is going to be used when you mount this device. We completely forgot about Secure Boot and UEFI. We forgot about users, and of course services; we only take care of one target, and we take care of the chroot environment manually. All those components are missing, and you could provide them with a better script, using maybe a real programming language like Python, Ruby, or whatever.

So the proposal that Yomi makes is: let's do the installation in a different way. We are going to use a configuration management system, which in this case is Salt. I don't know if you know Salt, but Salt is a configuration management system. It's kind of like Chef or Ansible, at least in principle; they all address the same problem space, but the architecture is completely different. It's something like Puppet. You can play a lot with it, and you can have very advanced configuration options there. The default option is to have a master and minions: we are going to have a node running a service called the master, which takes care of controlling the different minions that live on the different devices of your network. They have some advantages, but you can optionally remove the minions, or optionally remove the master; eventually you need one of them. You can have very funny and crazy architectures based on reactors: you can listen for some kind of event, so if one of the nodes changes its configuration or does something crazy, the master can capture an event and react based on the kind of event that is happening. Maybe you want to shut down a service and restart it, or change some parameters of the configuration. So it's a nice piece of technology that is able to do great stuff.
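To make that reactor idea concrete, here is a minimal sketch of what such a wiring could look like on the master. The event tag, file paths, and service name are my assumptions, not anything shown in the talk:

    # /etc/salt/master.d/reactor.conf (sketch): map an event tag to a reaction
    reactor:
      - 'salt/minion/*/start':              # fired when a minion (re)connects
          - /srv/reactor/restart_service.sls

    # /srv/reactor/restart_service.sls (sketch): restart a service on that minion
    restart_my_service:
      local.service.restart:
        - tgt: {{ data['id'] }}
        - arg:
          - my-service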
Of course, Salt has its own concepts, words, and jargon that only live in the Salt realm: grains, pillars, execution modules, state modules, and Salt states. But maybe it's easier to see it in a typical master-minion configuration. We have a node that runs the master, and we have three nodes that contain a minion. On the minion side, at the right, you have the grains. Grains are the minimal data that the minion exports to the master: I'm going to publish that this minion has, or currently has, this CPU, this amount of hard disk, this network, this MAC address, whatever. It's very basic information that the minion publishes.

On the other side, we have execution modules. An execution module is like a small, single action, basically written in Python, that is going to do something, and this something is whatever you can think of. For example, you have a module to start a service, a module to remove a file, a module to create directories, a module for whatever element. And it's only doing that: if it manages to do it, it returns true, and if it fails, maybe you get an exception or some nice report about why it failed.

On top of this basic element, the execution module, you have the state module, which is again Python code that leverages several execution modules in order to reach a state. And this is super cool, because the concept of a state is what makes Yomi quite different from the rest of the installers. A state is going to guarantee that a certain configuration is already in place; if not, it's going to make decisions in order to reach the final state that we want.

We also have states, which should not be confused with state modules, because states are something declarative: a YAML document that the user or the admin writes, which puts different state modules in order. So you are going to first make sure that Apache, for example, is installed; there is another state that makes sure that the Apache service is running, that the server directory is in place, and that the index file is there with the proper ownership and permissions. Those declarative documents are the ones that orchestrate the execution modules.

We have the pillars. A pillar is the data that the states use in order to have a proper configuration. For example, we can have a state that makes sure Apache is installed, but Apache can have different names in different distributions; via a pillar, I can provide all the names that this package has in my different distributions. So we have on one side the state, which makes sure that a certain state is reached, and the pillar, which is the data that the state uses. And between both sides we have a communication bus, and this bus is the channel of communication between the minions and the master, and it can be used to deliver events. So if one of the nodes is doing something, it can fire an event that is collected by the master, which acts accordingly.

So we know what an execution module is: the unit element that is going to be there. The state module is the piece of code that guarantees that a state is reached: it validates the input, checks the current status, and makes decisions about the actions that need to be done in order to reach the final state. If you don't provide the test parameter, those actions are actually executed; if you do provide the test parameter, you get a dry run that you can compare with the plan, and if there is some kind of difference, maybe some problem happened.
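To make the states concrete, the Apache example described above could look something like this minimal SLS sketch. The package name, path, and user are assumptions for an openSUSE-like system, not slides from the talk:

    apache_installed:
      pkg.installed:
        - name: apache2

    apache_running:
      service.running:
        - name: apache2
        - enable: True
        - require:
          - pkg: apache_installed

    index_in_place:
      file.managed:
        - name: /srv/www/htdocs/index.html
        - user: wwwrun
        - group: www
        - mode: '0644'
        - require:
          - service: apache_running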
Something very nice about this proposal, if you are able to express all your problems in this kind of semantics, is that you can fix a wrong configuration by reapplying your state, and if the state is already reached, nothing is going to change as a side effect. You don't break anything: you have idempotency.

Again, the states are declarative documents and have this kind of shape. In the first example, we have a way to express that a device is mounted; you can see that we put "mounted". So we are going to guarantee that the mount point is in place with the proper permissions, that the partition is mounted, and that the file system matches the one that you declared. If everything is in place, this state is satisfied; if not, it is going to put the missing elements in place in order to guarantee that, again, you have this device mounted in the proper place. You can have something more complicated, like preparing a kexec. You have cmd.run, where you can put a random shell script; this is very bad behavior, but you can do it, and the check of whether the state is met or not can live on a different line of your declaration.

And the very nice thing is that this declarative document can be enriched with a template, so you can provide logic on top of your logic. It's like a macro language, and you can write macros there. You can make decisions based on the pillars: the data in the pillars can be read from the state, and you make decisions inside the YAML document to hide certain elements of your document or to show other ones. So, using the pillars and the grains, you can produce a different description of the state. And the final element is that the pillar is also intelligent: you can enrich a traditional YAML document, which is something declarative, very flat and plain, with a template. That means that you can dynamically change your data based on the description that the minion provides to the master. Every minion has an ID, so if a minion has a different ID, you can change, for example, the file system you're going to apply. And this is extremely powerful, because you are able to inject your own grains; a grain is basically Python code that is executed to produce a result in a namespace, and you can query that from your data.
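For example, a templated state in the spirit of the mounted example could look like this sketch; the pillar key, the custom grain name, and the fallback values are assumptions on my side:

    {% set fs_type = pillar.get('fs_type', 'btrfs') %}

    /mnt/data:
      mount.mounted:
        - device: {{ grains.get('data_device', '/dev/sdb1') }}
        - fstype: {{ fs_type }}
        - mkmnt: True       # create the mount point if it is missing
        - persist: True     # keep the entry in /etc/fstab

Here the Jinja template decides the file system and device per minion, from pillars and grains, before Salt ever sees the final YAML.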
So now the idea is super clear: we are going to make an installer that is a Salt state. This is my master plan here. For each basic installation action, we are going to make sure that we have an execution module. Because Salt is a very big project, probably the actions that we need are already in place in the repository on GitHub; if not, we are going to fix bugs or whatever is missing. The next step is that for every high-level action, like for example mounting a device, partitioning a device, or creating a user, we are going to make sure that we have a state module that properly does the action, or reaches the state, that we want. So again, we reuse, extend, and enrich what is in the Salt repository. Salt is an open source project, and we are going to contribute the missing parts that we need to the main repo; and if there is nothing there that we need, we are going to implement it from scratch and try to upstream it. After that, we take care of what is really Yomi: the SLS, the YAML files that describe the states. We provide a way to parameterize all the data; the data, of course, is the responsibility of the user, but we can provide some examples for that. And with that, we have Yomi.

So I have a small demo. This demo is a two-node installation, and I tried to make the two installations wildly different. On one node we are going to have a BIOS machine, and we are going to install MicroOS. If you don't know MicroOS, it is a transactional-update operating system, which means we are going to have Btrfs with read-only subvolumes. And we have a second node; in that case it's not BIOS, it's a UEFI node, without Secure Boot, and it has two hard disks. There we are going to use LVM, Btrfs for the root, and XFS for home. This is how traditional Tumbleweed used to be installed; now everything is Btrfs.

So let's try to do that here. I have the demo in my repo. Basically, what I have here is a nice script that boots two VMs. The hard disks of those VMs are completely clean, there is nothing there, and I boot the VMs using a small Tumbleweed image. It's really minimal, a live image, and the only thing different from a normal one is that it contains the Salt minion. Actually, that is not even what makes it different, because Tumbleweed provides the Salt minion by default. So we have the Salt minion, and we have a Salt master somewhere. Here we have the Salt master, and we make sure that from here we can see our minions. We have two nodes, and both answer a ping with true, so yeah, we can see that they are there.

We have pillars, two of them. One is microos.sls; this data is going to be read by only one node, and we have different data that is read only by the other node. You can see here that we have a section for configuration. Obviously, we need a way to define the partitions, so we describe the devices that we have and the partitions that will live there. We have this type field, which says how the partition is going to be used; in that case we say it is an EFI partition, or LVM. Because we have an LVM device, we also provide all the information for the logical volumes that we expect there; we can use the same kind of parameters that are expected by the CLI tools that LVM uses. We have a different section for the file systems, where we can refer to a partition or a logical volume, which from the pillar's point of view makes no difference: it's only data. We have a fairly complicated schema for subvolumes, with a prefix; we have crazy stuff here. We have an XFS file system for home. For the bootloader, we require that it lives on one of these devices, and we want these partitions there.

And this is somewhat similar for MicroOS. Again we have a single device here, and the file systems; those are subvolumes very specific to the MicroOS installation, with parameters like copy-on-write. The bootloader is maybe a bit more complicated: we need to provide some parameters that GRUB requires. The patterns, of course, are going to be different. And yes, of course we want one service running: the Salt minion.
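To give a flavor of it, a pillar along the lines of the UEFI node could look roughly like this. This is my approximation of the shape described here, not the exact Yomi schema, and all names and sizes are assumptions:

    partitions:
      devices:
        /dev/sda:
          label: gpt
          partitions:
            - number: 1
              size: 512MB
              type: efi
            - number: 2
              size: rest
              type: lvm
        /dev/sdb:
          label: gpt
          partitions:
            - number: 1
              size: rest
              type: lvm

    lvm:
      system:
        devices:
          - /dev/sda2
          - /dev/sdb1
        volumes:
          - name: root
            size: 20G
          - name: home
            size: rest

    filesystems:
      /dev/sda1:
        filesystem: vfat
        mountpoint: /boot/efi
      /dev/system/root:
        filesystem: btrfs
        mountpoint: /
        subvolumes:
          prefix: '@'
          subvolume:
            - path: var
              copy_on_write: no
      /dev/system/home:
        filesystem: xfs
        mountpoint: /home

    bootloader:
      device: /dev/sda

    software:
      packages:
        - pattern:enhanced_base
        - kernel-default

    services:
      enabled:
        - salt-minion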
The nice thing is that when the machine is installed and rebooted, the minion that is running inside the new machine is able to connect to the master, with Yomi taking care of copying the certificates and everything, in order to make this process transparent. So we can apply a highstate. This is going to use the network, and we have a monitoring tool that captures those events that I was talking about and shows the status of the installation. So meanwhile, because this is an installer and it takes a while, let's finish the presentation.

So, now everything is clearer: we have Python code that lives in the Salt space, and we have YAML declaration files, which are what the Yomi project really is. We upstream all the code. The current status is that all the key components are now living in Salt, in the develop branch, so you can take this branch and Yomi is going to work. We took care of fixing bugs and missing features for parted and for zypper, and we plan to do the same for different package managers, like the Debian or Red Hat ones. We provide very nice features and new tools for hardware information. Of course, the Btrfs support needs a big revamp upstream; we use it quite a bit, so it's normal that we require more of it than other distributions do. We provide a very nice chroot module that takes care of all the dirty details of chroot. We have a very nice module called freezer: you can take a picture of the chroot, install garbage, and recover the previous state of the chroot using the package manager. More crazy stuff that we do.

Eventually, as you provide more features, the tree of those state declarations goes deeper and wider. This is a demo from two or three months ago; now the tree is bigger. But it takes care of all the steps that we saw in the first slides about the installation, and much more. We provide the partitioner, which is obviously the most complicated part. It has different modes of operation, and one of them, using linear programming, is able to make decisions about the sizes for you. So there is very crazy stuff already in place. One of the tricky elements here is that you have a state that is composed of different states, and composing states is a bit tricky, so there are some challenges in composing states. For example, we do a lot of crazy stuff: we install openSUSE like crazy, we have Kubic. It's still not at the same level as YaST; for example, part of the handling of partition sizes is missing. There is a plan to provide this feature, and maybe the configuration can be better. But we already have a nice CLI tool where you can say: okay, we have a new node in the network, please install Tumbleweed and use kubeadm for the rest of the actions that need to be done in order to install Kubic, sorry, Kubernetes.

Everything is upstream, everything is open source. All my contributions to Salt are reachable from the SaltStack repository, and Yomi is living in the openSUSE namespace on GitHub. And yes, all the packages and all the images are in OBS at your disposal. Eventually, this is everything I have to say. I don't know if anyone was using Salt before, or if you have any questions or comments. So, thank you.
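Before the questions, for reference: the demo flow of accepting the minions, applying the highstate, and watching the events from the master could look roughly like this sketch; the exact commands used in the demo may differ:

    salt-key -A -y                      # accept the keys of the freshly booted minions
    salt '*' test.ping                  # both nodes should answer True
    salt '*' state.highstate            # apply the states and start the installation
    salt-run state.event pretty=True    # follow the progress events on the bus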
Yeah, maybe I have a question. When you use this configuration to install the machine, are you also able to use it for configuration, like for long-term configuration? Once you put all the configuration in to install the machine, are you also able to modify it later?

Yes, of course. This is a state, so if the state changes, for example if you provide new patterns that should be there, or new users, yes, the system will change in proportion to your change. There are always limitations: for example, today we are not supporting resizing of partitions. There are complications with resizing, you can break stuff, so this part needs to be taken care of. But everything else can be done during the installation or post-installation. Something very neat about Yomi is that, because before the reboot we have a way to inject states inside the chroot, you can go super far in the configuration, in the provisioning of your node, before you reboot. Imagine starting services inside the chroot, sending commands inside the chroot; almost a full OpenStack or Kubernetes installation can go very far there. So that means that if you decide to reapply the changes on the running installation, those changes are going to be propagated, in that case not inside the chroot but inside the real machine.

Okay, a question. Maybe I missed something, but this is like a chicken-and-egg problem: you have to have the Salt agent first to make all of this work. So how do you do this?

So, we have the Salt master; it's a node that is going to be there, and it can be your laptop. Imagine that you have PXE boot: from your laptop you can inject the kernel, the initrd, and a root file system that contains the Salt minion. You have another option, the one that I used here: a DVD or USB stick that you plug into the machine and boot from. To break this cycle, a kernel, a RAM image, and a minion are the only things that you need, and you can provide them however you want: a stick, a USB drive, PXE boot, via firmware, whatever. So you break the cycle by injecting a minion with whatever mechanism you have.

No more questions then? Thank you, Alberto. And we have a short break before the next talk.