This presentation will have basically two parts, plus one more if we have time to get to it. The main part will be about the modularity in Anaconda, and the second, smaller part will be about what changes are coming to Anaconda in Fedora 33. It's not that really... oh, I will get to that, it will be a lot better.

So, something about me. I'm Jiří Konečný; you can call me Jirka, as most of you already do, because it's much easier to pronounce. The main point: I work at Red Hat (I have my hat here), and I have been working on Anaconda for my whole career at Red Hat, which is five years now. The main contacts I want to give you are these two: the #anaconda IRC channel and the anaconda-devel-list mailing list. If you have suggestions on how to make Anaconda better, or questions about creating your add-on or anything else, just ask us there. We are friendly people; we don't bite, usually.

First: I gave a presentation about Anaconda modularization two years ago at DevConf. If you weren't there, this will be a short introduction to what it is, and if you were there, it will at least refresh your memory. First of all, Anaconda modularization is not connected with Fedora Modularity at all. By the way, this slide is from the Fedora Wiki, I think, explaining Fedora Modularity, and it seems quite complicated; I guess we have a much simpler situation.

So, what is Anaconda modularization? Basically, what we are trying to do (and most of it, or at least more than half of it, is already done) is to split the monolithic design we had before, where basically everything was in one place. There was some kind of backend, but within the same code structure: the GUI handled logic for the GUI but also things that should be done in the backend, and the text UI was basically the same.
It was all more or less baked together. The problem is that when you changed something, it could change the behavior somewhere else where you didn't expect it, which is not really good, and we had a lot of problems with that.

So we are changing it to something like this: splitting parts of Anaconda into modules. Modules are single units which give you an API to control and view the state of one feature set. That means the timezone module, for example, will give you the date and the ability to set the timezone, NTP servers and similar things, and so on. All the modules connect to each other over the D-Bus bus, and they provide a stable API, which could be interesting for you if you are writing an add-on or something like that, because you get the same set of possibilities we have. We don't have any other way to communicate with the modules. OK, you could always create something like a file which passes information somewhere, but you don't want to do that, and we are really not doing that. All the modules have a stable API and communicate with each other.

The same holds for the UI. The text UI and graphical UI contain just logic for the UI, nothing else; they give all the data to the module, which then executes what's necessary and provides the results back to the graphical or text user interface.

What's interesting here is the Boss. The Boss is the main module, I would say. It's always there, and its main responsibility is to start up all the modules, provide them the kickstart, for example, grab all the tasks for the installation, execute the installation, and so on. It's basically the main point which manages all the modules and everything that is not specific to one module. I hope it's understandable.
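To make the architecture above concrete, here is a minimal plain-Python sketch of the idea: each module owns one feature area and exposes a small API, and the Boss only orchestrates. All names here (TimezoneModule, Boss, the method names) are illustrative stand-ins, not Anaconda's real D-Bus classes.

```python
class TimezoneModule:
    """Owns everything time-related: the timezone and NTP servers."""

    def __init__(self):
        self._timezone = "UTC"
        self._ntp_servers = []

    def set_timezone(self, timezone):
        self._timezone = timezone

    def get_timezone(self):
        return self._timezone

    def set_ntp_servers(self, servers):
        self._ntp_servers = list(servers)

    def installation_tasks(self):
        # Each module reports the work it needs done during installation.
        return ["Configure the timezone"]


class Boss:
    """Knows nothing module-specific; it only starts and coordinates modules."""

    def __init__(self, modules):
        self._modules = modules

    def collect_installation_tasks(self):
        # Ask every module for the tasks it has to run for the install.
        tasks = []
        for module in self._modules:
            tasks.extend(module.installation_tasks())
        return tasks
```

In the real design the UI would talk to such a module only through its published API, never by reaching into its internals.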
One rule we gave ourselves, basically a goal, is that we would like to change everything without the user noticing: no user-visible changes. We did plenty of that already. If you are using Fedora, you may have noticed that we moved, for example, the password and user spokes to the main hub, but there was much, much more that we did with really no visible changes, which was the goal: do not interrupt the user's workflow.

And the reasons why we wanted the modularization. One main aspect, I would say, is development and maintainability. As I said before, the original structure of Anaconda was pretty spaghetti, really: some logic was done in the UI, some logic in the backend and so on, and it was hard to spot where you should change stuff, and we were introducing regressions because we couldn't be sure whether a change would really change the behavior. Because of that we were trying to make changes as small as possible, especially for RHEL; but for Fedora we said to ourselves that we don't want to do that anymore. The main reason I'm telling you this is that we are really trying to split the code into minimal units which have only one thing to do, which ideally are created to do just one thing. I think we made great progress on that, but you can look at the code base, if you need to create your add-on or anything, and see for yourself.

Another thing, which is also part of the first point, is maintainability. We've created great test coverage for the new code. I think we have something like 70-80%, where before it was maybe, I don't know, 20%; not sure, really. That is not only useful so that we won't break existing use cases with our changes; it's also a great speed-up of development.
The problem with Anaconda is that to test your changes, you have to create an updates image, provide the updates image to the virtual machine (basically upload it to some server or something like that), then boot your virtual machine and test the use case. A great thing is testing the live DVD, because you have to type your URL every time you reboot. Then you test your changes, and if they don't work, you do it all again, and again, and again. We can usually have three virtual machines testing three use cases at once.

Because of that, we started with a different approach: we create tests before we try the code changes in the virtual machine. We are doing test-driven development because, from the Anaconda point of view, it's much faster. It's much faster to create the test and just adapt your code to pass the test than to try the code in the virtual machines, and it also gives us the side benefit of great test coverage.

Another thing I already pointed out, one of the main points, is the stable API. We don't officially provide it yet. Most of the API we have is stable, but we don't want to give you our word on that, because we haven't migrated everything yet, so it could still change because of some logic we don't see yet. But that's not really happening; it's very rare, and most of the time it's not that hard to fix things.

Another point, which could be pretty interesting for some organizations and mainly for other distributions (those based on Fedora, or not even based on Fedora; they just have to use parts of Anaconda), is splitting the UI from the backend. What it means is that, thanks to D-Bus, you are able to write your own UI which will communicate with our backend.
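The test-driven workflow described above can be sketched with Python's standard unittest module: write the test first, then make the code pass it, with no virtual machine round-trip. The UsersModule here is a made-up, minimal stand-in for illustration, not Anaconda's real users module.

```python
import unittest


class UsersModule:
    """Illustrative stand-in for a backend module handling users."""

    def __init__(self):
        self._users = []

    def add_user(self, name):
        if not name:
            raise ValueError("user name must not be empty")
        self._users.append(name)

    @property
    def users(self):
        return list(self._users)


class UsersModuleTestCase(unittest.TestCase):
    """Written first; the module is then adapted until these pass."""

    def test_add_user(self):
        module = UsersModule()
        module.add_user("jkonecny")
        self.assertEqual(module.users, ["jkonecny"])

    def test_add_user_rejects_empty_name(self):
        module = UsersModule()
        with self.assertRaises(ValueError):
            module.add_user("")
```

A test run like this takes milliseconds, which is why it beats the updates-image cycle for most backend logic.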
It could even be that you create your own modules, which are add-ons from the Anaconda point of view, and these provide the logic you need for your UI which is missing in the original parts. Or you file a bug or tell us that you need some feature, and if it's meaningful for us to implement, or easy to implement, we most probably won't have a problem doing that.

So, the current state of where we are right now. I will show snippets of code; I hope it will be most useful for the advanced people who are familiar with how to write an add-on for Anaconda. Sorry for that, but I think it's the best way to show what we achieved.

The main part of the modularization is the modules, and modules are split into two parts, I would say. One is the interface, which provides you the stable API, and it's really an interface in our code: we have files which just define the interfaces. They can be inherited and so on, but they don't really contain code. The second part is the implementation.

This is a screenshot of the code of one of the interfaces. There is some logic for connecting the signals (it has to be there, sorry for that), but other than that, what you see is a method, which is really a D-Bus API method: it takes a payload type, which is a string, and it gives you back an object path, meaning a path to another D-Bus object. And that's basically everything. We also document the possible values which are supported by this API. That means that when you are trying to find out how to do something, most of the time you just have to look at the interfaces. If some information is missing there, tell us and we will add it, or create a pull request, that's even better.
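The interface/implementation split just described might look like the following minimal plain-Python sketch. In reality these interfaces are published over D-Bus; here the class names, the object-path prefix, and the supported payload values are all assumptions made up for illustration.

```python
class PayloadsImplementation:
    """Implementation: holds the state and logic, knows nothing about D-Bus."""

    def __init__(self):
        self._payloads = {}
        self._counter = 0

    def create_payload(self, payload_type):
        self._counter += 1
        path = "/org/example/Payloads/{}".format(self._counter)
        self._payloads[path] = payload_type
        return path


class PayloadsInterface:
    """Interface: the stable API surface; it only delegates and documents."""

    def __init__(self, implementation):
        self._implementation = implementation

    def CreatePayload(self, payload_type: str) -> str:
        """Create a payload of the given type.

        payload_type is a string; hypothetical supported values could be
        DNF, LIVE_OS, LIVE_IMAGE. Returns an object path, i.e. a path
        to another D-Bus object representing the new payload.
        """
        return self._implementation.create_payload(payload_type)
```

The point is that reading just the interface class, with its typed signature and the documented values, is enough to use the API.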
And that's one of the benefits we are trying to provide: being able to understand the code, or the API, very quickly. We want to use these interfaces to generate our future documentation; that's the plan right now. So we are trying to have them as self-explanatory as possible.

Another important piece of the puzzle is tasks. A task is a small unit which should do just one transaction in the installation environment. "Transaction" is maybe not the correct word, but basically: for example, I want to set a specific date. OK, a specific date is not the correct example because that's really easy, but anything which takes a bit more time, for example installing the package set with DNF. The payload module will create a task and provide the path to this task. The task can then be started, and you can read its status and everything else over D-Bus. Your controller, or the UI you write, basically gets this path to the task, can start the task, and can follow whether the task finished and what the return code was.

I think this is a great improvement, because it forces us to write tasks as separate pieces of code. When we want to do any logic which is a little more complicated, we should create a task, and we basically have to, because without the task we would block our D-Bus API, which we don't want. So the tasks are somewhat a must-have from our point of view, but on the other hand they are also a great enhancement for the readability of the code, because most of the time you just need the name of the task and you know what it is doing, and you can run the task when you are ready to.

Another interesting thing we already did is configuration files. Before that, we had what is on the left side: install classes.
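To make the task pattern above concrete, here is a minimal sketch: one unit of work that a controller can start and then observe from outside. The class name, the threading detail, and the example task name are illustrative assumptions, not Anaconda's actual Task base class.

```python
import threading


class Task:
    """One self-contained unit of installation work, observable from outside."""

    # Most of the time the name alone tells you what the task does.
    name = "Install the package set"

    def __init__(self):
        self._done = threading.Event()
        self.return_code = None

    def run(self):
        # The real work would go here (e.g. running the package transaction).
        self.return_code = 0
        self._done.set()

    def start(self):
        # Run in the background so the caller's API is never blocked.
        threading.Thread(target=self.run, daemon=True).start()

    def wait(self, timeout=None):
        # The controller can poll or block until the task finishes.
        return self._done.wait(timeout)
```

Running the work off the calling thread is the key design point: a long-running method call would otherwise block the whole API, which is exactly what the task pattern avoids.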
An install class was Python code which described your product and variant, meaning, for example, that Fedora Workstation has the default file system XFS (which is not true anymore, but it was). It gave you the default automatic partitioning behavior, set how the UI should look, and so on and so forth. This is great, it gives you great power, but on the other hand it also gives you many more ways to break the code, to use something from Anaconda which is not considered an API from our side, and similar. It's also hard to fit this into the new solution: where should it live? If it's on the bus, then it has to communicate with all the modules, and it could be really hard for users to understand. So we took that, basically scrapped it, and created configuration files.

The configuration files live in basically three places. One is the defaults; the second is the configuration of products and variants. By the way, "product" means Fedora or RHEL, for example, or CentOS or Scientific Linux, and "variant" means Workstation, Server, container, etc. The configuration files are pretty easy to understand. The last place is a configuration file you can provide yourself in the installation environment, and it will override any other; it has the highest priority. So if you are a geek who likes to tweak the installation environment and have your own installation image, you can create a configuration file to just change your default file system and use that ISO. No problem with that; it should work. Of course, it's not fully tested: we are not testing every setting and all the combinations, there are plenty of combinations, so there could be a bug, but it's basically supported. If you find a bug there, just file it against us.

One interesting detail is inheritance. One of the benefits of the install classes was that Fedora Silverblue, for example, inherited from Fedora Workstation and changed just some of the values.
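This layered inheritance (defaults first, then the base product and variant, then your own product and variant) maps naturally onto Python's configparser, where each later read overrides earlier values. The section names, option names, and file contents below are made up for illustration; they are not the real Anaconda configuration schema.

```python
from configparser import ConfigParser

# Lowest-priority layer: the defaults shipped with the installer.
DEFAULTS = """
[Storage]
file_system_type = ext4
"""

# Base product layer, e.g. a hypothetical Fedora Workstation file.
WORKSTATION = """
[Storage]
file_system_type = btrfs
"""

# The product's own layer, e.g. a hypothetical Silverblue file that
# inherits the storage settings and only adds its own tweaks.
SILVERBLUE = """
[User Interface]
hidden_spokes = NetworkSpoke
"""

config = ConfigParser()
# Read lowest priority first; each read_string() overrides what came before.
for layer in (DEFAULTS, WORKSTATION, SILVERBLUE):
    config.read_string(layer)
```

A user-supplied file in the installation environment would simply be one more `read_string`/`read` call at the end, which is why it wins over everything else.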
We have the same logic in the configuration files, but it's not real Python inheritance: you set the base product and base variant, and it performs a merge. It loads the configuration values from the defaults, then overrides them with the base product and base variant, and then overrides them with your product and variant. This is really pretty simple to understand and use, much less error-prone than the install classes, and pretty easy to modify. The Btrfs change, switching to Btrfs as the default, was more or less one line in the configuration file. There was more work, but that was mostly fixing things; the change itself, switching to something else, was really, really pretty easy. And plenty of the latest changes from the community (system-wide changes and self-contained changes) were really done by a change in a configuration file, because most of the time people just want to change a default; they don't want to add something. It's a minor request.

Another interesting point is add-ons. I think we improved add-on support a lot; however, because not all the modules are migrated yet, you can still have difficulties with it. But the plan is to put add-ons on basically the same level as the modules, so they will have the same power and the same possibilities to modify your system or to read stuff from the modules. The benefits are, of course, the stable API. One benefit which could be pretty interesting for some people is that you can use a language of your choice. We don't force you to use Python; you just have to have support for D-Bus, and that's everything. We really don't care, from the point of view of our API, what your code is written in, but on the other hand, don't expect us to understand your code if you use something like, I don't know, Scala, or anything which we don't understand.

The last point, which is unfortunately Python-only, for really obvious reasons, is that you can use the tooling we are using in Anaconda. We specifically created a Python module (basically, a directory with an __init__ file in it is a Python module in Python), and we are even thinking about maybe packaging it; I'm not sure if that makes any sense. But all the pieces of code we are using that could be interesting for add-ons, you are free to use. For example, you can just use our storage constant to get a proxy for the D-Bus object behind it. It's pretty useful, and it's helping us and add-ons a lot, so you can simplify your workflow. If there is something missing which could be interesting, just tell us; even if you don't know whether it's interesting to us, you don't have to know, we will tell you.

And now the interesting part: what was already implemented. As you can see, plenty of the modules are already on D-Bus; the only missing part is the payload, and even that is partially on D-Bus. I will go quickly through the list.

You have localization, so you can set the keyboard and your language through the module. It's used already; Radek from our team did great work there, and I would say it's really easy to use the module when you take into account how complicated the whole thing is. One note about this: we are trying to have a minimal API, which means we don't want anything which won't be used. It doesn't mean that we won't provide you more; we are just requesting your heads-up, your info, about what you are interested in having. Even now we have a few notes from existing add-ons, which we are trying to modify so we don't break them.

Then there is the network. Then security: basically the authselect/authconfig and SELinux stuff. Then services: you are able to enable a service, stop a service, start a service... I'm not sure, really, whether you are able to start and stop services, because it doesn't make much sense, but at minimum it has the API which is provided by the kickstart.

Then we have storage, and I really have to give my respect to Vendula Poncová, who did this work, because moving the storage in Anaconda to a module is a heroic task. As you know, we have the custom partitioning spoke, we have automatic partitioning, and the logic is now all in the module. The module is built with plenty of interfaces: you use one interface just to change your FCoE, another interface just for custom partitioning, another interface for automatic partitioning. It's really nicely separated, and it even makes sense for looking at the device tree: what devices there will be, how the partitioning will look after the installation, and similar. Plenty and plenty of code was moved and reworked, and it's working pretty great. We had a minimal amount of bugs, which surprised me when you take into account how giant this piece of code is. So the storage is great, really.

Another one is timezones: setting your date and time, NTP servers and so on, as I said before. Another one is subscription, which is work from Martin Kolman from our team. It's a great module which should have been there even two years ago, but it wasn't. It's pretty new, and it's mainly for RHEL, so sorry for talking about it too much if that's the case, but Martin did great work on it. There was a really big number of iterations and tweaks to make it work, and finally it's there. And another one is users: create your password, create a user, and so on.

I don't have the payload here, and for a reason: I have a separate slide for it. The payload, I think, is the module (or most of it is) where it was hardest to change something without breaking something else. And we wanted to change it, so we basically changed the whole way it works. What you see is more or less the way we want to have it; it's a future plan. Right now we just have some kind of source support implemented, but it will change.

Before, there was basically one file for each payload. That means when you were installing... not CoreOS, sorry, Silverblue, you had the rpm-ostree payload; when you were installing Server, you had the DNF payload; and when you were installing a live image, you had the live OS payload. There's even a special one for a live image from the kickstart: the live DVD is live OS, and a live image from the kickstart is live image, and they kind of share code. The problem is that the DNF payload (all the payloads, actually) had to take care of all the sources, and that's the biggest pain, because there was not shared code but a shared environment with dracut: what dracut did was relied upon in the DNF payload code. I think it was only the DNF payload. And the problem there is that it's really hard to spot why it behaves the way it does. You end up with a method with three ifs and explanations like "if dracut mounted the repository here", or "if it was mounted there in stage 2". It's great, really great, to debug something like that, and of course you don't know why dracut does it, or did it, sorry.

So we took a totally different approach, and this is really more of an idea of how we want it to work; we don't know whether there won't be modifications, because we could have missed something. Basically there will be Payloads as the main module, which you will contact; you will use it to create your payload, and you will get it as a D-Bus path. Then you have the payload, and the payload has one to N sources, and the sources should be created by something other than the payload. The graph should probably look a little different, I guess, because there should be something like a source creator, which will give you sources that you then attach to the payload. I hope it's not that complicated and is somewhat understandable, but the logic is, as you see, not trivial. What we are trying to do is move all the setting up of a source into a separate unit, a separate part, so the payload will just tell all its sources: set up, and provide me the paths where things are mounted, which I can use to start the installation. In the NFS ISO example, the source will mount the NFS share, look for the ISO, and if there is an ISO, it will mount the ISO and provide the path to it. This can be quite complicated, because we also support an expanded installation tree without an ISO, so it could be just the NFS directory, and so on; it can be really non-transparent. All the mounting and such happens in a directory specific to the given source, so the sources are not fighting over the same directory as before. I hope it's clear.

Our other future plans for the modularization: basically, to weaken dependencies between modules. We are thinking about packaging modules as separate packages. Not sure whether we will really go that way or not, because it could be more work for us when no one asked for it, but if there is any interest in that, we will look at it; we are thinking about it even now, so we will see. Another interesting piece of code will be dynamic sorting of installation tasks: when the installation starts, every module has a method which provides the installation tasks, not to you, but to the Boss. The Boss gives all the modules the information that the installation will start and collects all the tasks which have to be run to make the installation complete. By the way, we want to support multiple payloads, and they will run sequentially, so we can use this design in basically the same way: the installation tasks will be retrieved from all the payloads and run one by one, based on some priority or stages; we don't know exactly how we will do this yet. Another one (we are not close to it yet, because we have to finish the modularization first) is that we want to create a web UI, basically built on Cockpit, which will provide an easy way to do a remote installation without any VNC viewer or anything else to provide the UI. And, as was more or less already said, we want to give you more possibilities: if you need something, just tell us and we will add it. It's not that we are closed to extending the API; we just don't want to implement something which is not needed.

So now the second part, and, as I expected, I have ten minutes, not even ten minutes. The second part is just a quick list of what you can expect in Fedora 33. It's not about the modularization anymore, because the modularization is, again, done in the backend, and we are trying not to change visible stuff for the user. These were system-wide changes and self-contained changes; I think all of them were proposed by someone outside of our team and just done in Anaconda, so you will see the changed behavior. That's why I have it here: so you know what to expect from Fedora 33. It's not that we did nothing; we did some work, but it was more or less testing on the side, and there was some work on one of the changes, which I will tell you about.

The first and most interesting one is Btrfs by default. It was mainly the Workstation working group's work. I have to give my respect (my gratitude, I guess; I'm not sure, it depends on whether you like or don't like Btrfs) to Neal Gompa and Chris Murphy. I know there were more people working on this, but these two were the most involved on the installer side, so I'm not saying they are the only ones who worked on this change; that's not true at all. Basically, it was just one change in the configuration file, plus some fixes, of course. One slightly older piece of news is that you are now able to boot directly from a Btrfs subvolume. That was really the work of Neal Gompa, who did a great amount of work, pinging us and so on, to make it happen.

The second thing about storage is that there won't be any disk swap by default. You can still create disk swap, but you have to do it manually in custom partitioning, or a similar way, in the installer; and then hibernation will work for you, if you create your swap. If you just use automatic partitioning, you will get swap on zram: there's a systemd unit which generates this swap on zram for you. It works really well; I already used it on Fedora 32 because I was interested in how it works.

Another one is NTS support, which basically protects your NTP communication against man-in-the-middle attacks. It works with some exchange of secrets and keys; I don't really see into it, but I have to thank Miroslav Lichvar, because he helped us a lot, not only to implement this feature but also to enhance our existing solution. We could remove plenty of lines (I don't know, one file, maybe even more) and switch to a simpler solution, thanks to the fact that he pointed it out to us. So thanks a lot for that.

And the last part of this presentation, the big one, is a little bit of marketing from my side, because during the modularization process we created great libraries, and I would like you to try them, because they are a great solution for plenty of cases. I will make it quick.

dasbus: we were using pydbus, but we had a problem with the maintainer. He's not responsive; there are plenty of pull requests waiting a very long time without any reaction from him. So we created something new called dasbus. It's a pure Python library for communicating with D-Bus, with a lot of features, and I think it's much better than pydbus, which is unusable in many cases. One of the main points, I think, is that it's not possible to easily make asynchronous calls (or synchronous; not sure which) in pydbus. That was one of the main points, but there are plenty more; even the interfaces are part of the dasbus library. It supports them, but you don't have to use those features, it's up to you. It's pretty stable right now; we did plenty of bug fixing. Vendula Poncová is the creator of this library from the beginning; you can find her on IRC if you have any issues with the library. File an issue on GitHub, or ask her there or on Bugzilla; she's really responsive.

The second one is simpleline. It basically comes from Anaconda; we extracted it into a separate library, and I'm the maintainer there. It's finally LGPLv3+ (I was able to make the change from GPLv2, because I don't think that's appropriate for this kind of library), and it's finally released on PyPI. So if you want to create a simple text UI for line-based devices, which includes even the console, this is a great library for that.

And that's all from my side. I provided a few links here with our contacts and also our blog. That's everything from me; I will look whether there are some questions.

Yeah, you can customize the files. The question is whether you can customize the defaults without using a kickstart or automating everything. Yeah, you can change the defaults; the only drawback is that you have to get the file into the ISO, which is not problematic on Fedora (you can basically inject it there), but for RHEL there could be some problems with signing the image or similar, so you have to take care of that. Yeah, Lorax can do that; it's not that it isn't doable, it's more that licensing could be a problem with RHEL. You can also inject a kickstart; I think that's supported, even though I don't think it should be.

Will this all be modular when RHEL 9 branches? Do you mean modular like Fedora Modularity, or modular like the Anaconda modularization? The Anaconda modularization, I guess, in that case. I hope so. We would really like to have all the modules on D-Bus by Fedora 34, but I guess there still won't be features like the dynamic task sorting and so on, because unfortunately there were plenty of changes in Fedora and plenty of work on RHEL for us right now, so we didn't have time to work on it in, I don't know, the last few months. I hope we will get to it soon and finally finish the payload. I wouldn't expect to get the task sorting and everything, but even without that you will be able to make your add-on and communicate with the D-Bus API for all the functionality, because all the modules will be there. It should be modular on D-Bus, with not everything there (as I said, the task sorting won't be there, and maybe the Boss won't have all the functionality), but all the modules should be there and working in the modular way in Fedora 34, I hope. I don't want to promise anything, because we are really swamped with work right now.

OK, I'm out of time now, so I will leave you. Thank you all for listening, and I hope you liked my presentation. If you have any questions, just ping me here; I will be at some sessions or somewhere, I'm not sure. OK, bye.